2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR): Latest Publications

Multi-resolution deblurring
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2014-10-01 DOI: 10.1109/AIPR.2014.7041901
Michel McLaughlin, En-Ui Lin, Erik Blasch, A. Bubalo, Maria Scalzo-Cornacchia, M. Alford, M. Thomas
{"title":"Multi-resolution deblurring","authors":"Michel McLaughlin, En-Ui Lin, Erik Blasch, A. Bubalo, Maria Scalzo-Cornacchia, M. Alford, M. Thomas","doi":"10.1109/AIPR.2014.7041901","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041901","url":null,"abstract":"As technology advances; blur in an image remains as an ever-present issue in the image processing field. A blurred image is mathematically expressed as a convolution of a blur function with a sharp image, plus noise. Removing blur from an image has been widely researched and is still important as new images are collected. Without a reference image, identifying, measuring, and removing blur from a given image is very challenging. Deblurring involves estimating the blur kernel to match with various types of blur including camera motion/de focus or object motion. Various blur kernels have been studied over many years, but the most common function is the Gaussian. Once the blur kernel (function) is estimated, a deconvolution is performed with the kernel and the blurred image. Many existing methods operate in this manner, however, these methods remove blur from the blurred region, but alter the un-blurred regions of the image. Pixel alteration is due to the actual intensity values of the pixels in the image becoming easily distorted while being used in the deblurring process. The method proposed in this paper uses multi-resolution analysis (MRA) techniques to separate blur, edge, and noise coefficients. Deconvolution with the estimated blur kernel is then performed on these coefficients instead of the actual pixel intensity values before reconstructing the image. Additional steps will be taken to retain the quality of un-blurred regions of the blurred image. Experimental results on simulated and real data show that our approach achieves higher quality results than previous approaches on various blurry and noise images using several metrics including mutual information and structural similarity based metrics.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113980247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
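As a rough illustration of the wavelet-domain route described above (not the authors' code), the sketch below deconvolves only the coarse approximation band of a wavelet decomposition, leaving the detail (edge/noise) bands untouched. The wavelet ('db4', two levels), the Wiener noise-to-signal ratio, and the origin-anchored Gaussian kernel are all illustrative assumptions; the paper's extra steps for protecting un-blurred regions are omitted.

```python
import numpy as np
import pywt  # PyWavelets

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def wiener_deconvolve(band, kernel, nsr=1e-2):
    # Frequency-domain Wiener filter; nsr is an assumed noise-to-signal ratio.
    H = np.fft.fft2(kernel, s=band.shape)
    G = np.fft.fft2(band)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))

def mra_deblur(image, kernel):
    # Separate coefficients by scale, deconvolve the blur-dominated
    # approximation band only, then reconstruct the image.
    coeffs = pywt.wavedec2(image.astype(float), 'db4', level=2)
    coeffs[0] = wiener_deconvolve(coeffs[0], kernel)
    return pywt.waverec2(coeffs, 'db4')
```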
Automatic segmentation of carcinoma in radiographs
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2014-10-01 DOI: 10.1109/AIPR.2014.7041904
Fatema A. Albalooshi, Sara Smith, Yakov Diskin, P. Sidike, V. Asari
{"title":"Automatic segmentation of carcinoma in radiographs","authors":"Fatema A. Albalooshi, Sara Smith, Yakov Diskin, P. Sidike, V. Asari","doi":"10.1109/AIPR.2014.7041904","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041904","url":null,"abstract":"A strong emphasis has been made on making the healthcare system and the diagnostic procedure more efficient. In this paper, we present an automatic detection technique designed to segment out abnormalities in X-ray imagery. Utilizing the proposed algorithm allows radiologists and their assistants to more effectively sort and analyze large amount of imagery. In radiology, X-ray beams are used to detect various densities within a tissue and to display accompanying anatomical and architectural distortion. Lesion localization within fibrous or dense tissue is complicated by a lack of clear visualization as compared to tissues with an increased fat distribution. As a result, carcinoma and its associated unique patterns can often be overlooked within dense tissue. We introduce a new segmentation technique that integrates prior knowledge, such as intensity level, color distribution, texture, gradient, and shape of the region of interest taken from prior data, within segmentation framework to enhance performance of region and boundary extraction of defected tissue regions in medical imagery. Prior knowledge of the intensity of the region of interest can be extremely helpful in guiding the segmentation process, especially when the carcinoma boundaries are not well defined and when the image contains non-homogeneous intensity variations. We evaluate our algorithm by comparing our detection results to the results of the manually segmented regions of interest. Through metrics, we also illustrate the effectiveness and accuracy of the algorithm in improving the diagnostic efficiency for medical experts.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130557973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
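A toy sketch of the intensity-prior idea only, under heavy assumptions: the lesion intensity prior is modeled as a single Gaussian fitted to previously segmented examples, and the texture, gradient, and shape priors the abstract lists are ignored. The two-standard-deviation threshold is arbitrary.

```python
import numpy as np
from scipy import ndimage

def intensity_prior(training_rois):
    # training_rois: list of 1-D arrays of lesion pixel intensities (prior data).
    vals = np.concatenate(training_rois)
    return vals.mean(), vals.std()

def segment_with_prior(xray, mu, sigma, thresh=2.0):
    # Keep pixels within `thresh` standard deviations of the lesion prior.
    score = np.abs(xray.astype(float) - mu) / sigma
    mask = score < thresh
    mask = ndimage.binary_opening(mask, iterations=2)  # drop speckle
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)            # keep largest region
```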
An automated workflow for observing track data in 3-dimensional geo-accurate environments
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2014-10-01 DOI: 10.1109/AIPR.2014.7041895
D. Walvoord, Andrew C. Blose, B. Brower
{"title":"An automated workflow for observing track data in 3-dimensional geo-accurate environments","authors":"D. Walvoord, Andrew C. Blose, B. Brower","doi":"10.1109/AIPR.2014.7041895","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041895","url":null,"abstract":"Recent developments in computing capabilities and persistent surveillance systems have enabled advanced analytics and visualization of image data. Using our existing capabilities, this work focuses on developing a unified approach to address the task of visualizing track data in 3-dimensional environments. Our current structure from motion (SfM) workflow is reviewed to highlight our point cloud generation methodology, which offers the option to use available sensor telemetry to improve performance. To this point, an algorithm outline for navigation-guided feature matching and geo-rectification in the absence of ground control points (GCPs) is included in our discussion. We then provide a brief overview of our onboard processing suite, which includes real-time mosaic generation, image stabilization, and feature tracking. Exploitation of geometry refinements, inherent to the SfM workflow, is then discussed in the context of projecting track data into the point cloud environment for advanced visualization. Results using the new Exelis airborne collection system, Corvus Eye, are provided to discuss conclusions and areas for future work.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127346418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
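One step of the described workflow lends itself to a short sketch: placing a 2-D image track into the 3-D scene by intersecting the pixel's viewing ray with a ground plane. This is our simplification (the paper projects into the reconstructed point cloud); K, R, t are camera intrinsics and pose as the SfM workflow would refine them, and the flat-ground assumption is ours.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t, ground_z=0.0):
    # Viewing ray for pixel (u, v), expressed in world coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam
    cam_center = -R.T @ t
    # Intersect the ray with the plane z = ground_z.
    s = (ground_z - cam_center[2]) / ray_world[2]
    return cam_center + s * ray_world
```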
3D sparse point reconstructions of atmospheric nuclear detonations
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2014-10-01 DOI: 10.1109/AIPR.2014.7041938
Robert C. Slaughter, J. McClory, Daniel T. Schmitt, M. Sambora, K. Walli
{"title":"3D sparse point reconstructions of atmospheric nuclear detonations","authors":"Robert C. Slaughter, J. McClory, Daniel T. Schmitt, M. Sambora, K. Walli","doi":"10.1109/AIPR.2014.7041938","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041938","url":null,"abstract":"Researchers at Lawrence Livermore National Laboratory (LLNL) have started digitizing technical films spanning the above ground atmospheric nuclear testing operations conducted by the United States from 1950 through the 1960s. This technical film test data represents unique information that can be use as a primary validation data source for nuclear effects codes that are used by national researchers for assessments on nuclear force management, nuclear detection and reporting, and nuclear forensics mission areas. Researchers at the Air Force Institute of Technology (AFIT) have begun employing modern digital image processing and computer vision techniques to exploit this data set and determine specific invariant features of the early dynamic fireball growth. The focus of this paper is to introduce the methodology used for three dimensional sparse reconstructions of nuclear fireballs. Also discussed are the the difficulties associated with the technique.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114942679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 2
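In the two-view case, the sparse-reconstruction core reduces to triangulating matched points; a minimal OpenCV sketch follows. The matched correspondences and calibrated projection matrices are assumed inputs; recovering them from scanned film with unknown camera metadata is precisely the difficulty the paper discusses.

```python
import numpy as np
import cv2

def sparse_points(P1, P2, pts1, pts2):
    # P1, P2: 3x4 camera projection matrices for the two frames.
    # pts1, pts2: (N, 2) arrays of matched pixel coordinates.
    X = cv2.triangulatePoints(P1, P2,
                              pts1.T.astype(np.float64),
                              pts2.T.astype(np.float64))
    return (X[:3] / X[3]).T  # homogeneous -> Euclidean, shape (N, 3)
```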
Depth data assisted structure-from-motion parameter optimization and feature track correction
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2014-10-01 DOI: 10.1109/AIPR.2014.7041930
S. Recker, C. Gribble, Mikhail M. Shashkov, Mario Yepez, Mauricio Hess-Flores, K. Joy
{"title":"Depth data assisted structure-from-motion parameter optimization and feature track correction","authors":"S. Recker, C. Gribble, Mikhail M. Shashkov, Mario Yepez, Mauricio Hess-Flores, K. Joy","doi":"10.1109/AIPR.2014.7041930","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041930","url":null,"abstract":"Structure-from-Motion (SfM) applications attempt to reconstruct the three-dimensional (3D) geometry of an underlying scene from a collection of images, taken from various camera viewpoints. Traditional optimization techniques in SfM, which compute and refine camera poses and 3D structure, rely only on feature tracks, or sets of corresponding pixels, generated from color (RGB) images. With the abundance of reliable depth sensor information, these optimization procedures can be augmented to increase the accuracy of reconstruction. This paper presents a general cost function, which evaluates the quality of a reconstruction based upon a previously established angular cost function and depth data estimates. The cost function takes into account two error measures: first, the angular error between each computed 3D scene point and its corresponding feature track location, and second, the difference between the sensor depth value and its computed estimate. A bundle adjustment parameter optimization is implemented using the proposed cost function and evaluated for accuracy and performance. As opposed to traditional bundle adjustment, in the event of feature tracking errors, a corrective routine is also present to detect and correct inaccurate feature tracks. The filtering algorithm involves clustering depth estimates of the same scene point and observing the difference between the depth point estimates and the triangulated 3D point. Results on both real and synthetic data are presented and show that reconstruction accuracy is improved.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129397593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
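A sketch of our reading of the combined cost for a single observation: an angular term between the ray to the triangulated point and the ray through the feature, plus a sensor-depth residual. The relative weight w is an assumed tuning parameter; the paper's actual weighting and bundle adjustment machinery are not reproduced here.

```python
import numpy as np

def combined_cost(point3d, cam_center, feat_ray, sensor_depth, w=1.0):
    # Ray from the camera center to the triangulated 3D point.
    ray = point3d - cam_center
    depth_est = np.linalg.norm(ray)
    ray = ray / depth_est
    feat_ray = feat_ray / np.linalg.norm(feat_ray)
    # Angular error between computed point and feature track direction.
    angular = np.arccos(np.clip(ray @ feat_ray, -1.0, 1.0))
    # Difference between the sensor depth value and its computed estimate.
    depth_resid = sensor_depth - depth_est
    return angular**2 + w * depth_resid**2
```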
Mobile ISR: Intelligent ISR management and exploitation for the expeditionary warfighter
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2014-10-01 DOI: 10.1109/AIPR.2014.7041918
Donald Madden, T. Choe, Hongli Deng, Kiran Gunda, H. Gupta, N. Ramanathan, Z. Rasheed, E. Shayne, Asaad Hakeem
{"title":"Mobile ISR: Intelligent ISR management and exploitation for the expeditionary warfighter","authors":"Donald Madden, T. Choe, Hongli Deng, Kiran Gunda, H. Gupta, N. Ramanathan, Z. Rasheed, E. Shayne, Asaad Hakeem","doi":"10.1109/AIPR.2014.7041918","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041918","url":null,"abstract":"Modern warfighters are informed by an expanding variety of Intelligence, Surveillance and Reconnaissance (ISR) sources, but the timely exploitation of this data poses a significant challenge. ObjectVideo (\"OV\") presents a system, Mobile ISR to facilitate ISR knowledge discovery for expeditionary warfighters. The aim is to collect, manage, and deliver time-critical information when and where it is needed most. The Mobile ISR system consumes video, still imagery, and target metadata from airborne, ground-based, and hand-held sensors, and indexes that data based on content using state-of-the-art video analytics and user tagging. The data is stored in a geospatial database and disseminated to warfighters according to their mission context and current activity. The warfighters use an Android mobile application to view this data in the context of an interactive map or augmented reality display, and to capture their own imagery and video. A complex event processing engine enables powerful queries to the knowledge base. The system leverages the extended DoD Discovery Metadata Specification (DDMS) card format, with extensions to include representation of entities, activities, and relationships.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131251533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
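The mission-context dissemination step can be illustrated with a deliberately naive sketch (entirely our construction; a real system would query a geospatial database rather than scan a list): detections indexed by location and time are filtered to a warfighter's mission area and time window.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    lat: float
    lon: float
    t: float   # capture time, e.g. epoch seconds
    tag: str   # analyst or analytic label

def relevant(detections, bbox, t_start, t_end):
    # bbox: ((lat_min, lon_min), (lat_max, lon_max)) mission area.
    (lat0, lon0), (lat1, lon1) = bbox
    return [d for d in detections
            if lat0 <= d.lat <= lat1
            and lon0 <= d.lon <= lon1
            and t_start <= d.t <= t_end]
```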
A container-based elastic cloud architecture for real-time full-motion video (FMV) target tracking
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2014-10-01 DOI: 10.1109/AIPR.2014.7041896
Ryan Wu, Yu Chen, Erik Blasch, Bingwei Liu, Genshe Chen, Dan Shen
{"title":"A container-based elastic cloud architecture for real-time full-motion video (FMV) target tracking","authors":"Ryan Wu, Yu Chen, Erik Blasch, Bingwei Liu, Genshe Chen, Dan Shen","doi":"10.1109/AIPR.2014.7041896","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041896","url":null,"abstract":"Full-motion video (FMV) target tracking requires the objects of interest be detected in a continuous video stream. Maintaining a stable track can be challenging as target attributes change over time, frame-rates can vary, and image alignment errors may drift. As such, optimizing FMV target tracking performance to address dynamic scenarios is critical. Many target tracking algorithms do not take advantage of parallelism due to dependencies on previous estimates which results in idle computation resources when waiting for such dependencies to resolve. To address this problem, a container-based virtualization technology is adopted to make more efficient use of computing resources for achieving an elastic information fusion cloud. In this paper, we leverage the benefits provided by container-based virtualization to optimize an FMV target tracking application. Using OpenVZ as the virtualization platform, we parallelize video processing by distributing incoming frames across multiple containers. A concurrent container partitions video stream into frames and then resembles processed frames into video output. We implement a system that dynamically allocates VE computing resources to match frame production and consumption between VEs. The experimental results verify the viability of container-based virtualization for improving FMV target tracking performance and demostrates a solution for mission-critical information fusion tasks.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133448641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 20
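A single-machine stand-in for the frame-distribution idea, with a process pool playing the role of the worker containers (OpenVZ orchestration and elastic resource allocation are out of scope here): imap preserves frame order, so the output video can be reassembled as results arrive.

```python
from multiprocessing import Pool

def process_frame(frame):
    # Placeholder for per-frame detection/tracking work.
    return frame

def run_pipeline(frames, workers=4):
    # Distribute frames across worker processes; yield results in order.
    with Pool(workers) as pool:
        for out in pool.imap(process_frame, frames, chunksize=8):
            yield out  # reassemble processed frames into the output stream
```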
Extension of no-reference deblurring methods through image fusion
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2014-10-01 DOI: 10.1109/AIPR.2014.7041905
M. Ferris, Erik P. Blasen, Michel McLaughlin
{"title":"Extension of no-reference deblurring methods through image fusion","authors":"M. Ferris, Erik P. Blasen, Michel McLaughlin","doi":"10.1109/AIPR.2014.7041905","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041905","url":null,"abstract":"Extracting an optimal amount of information from a blurred image without a reference image for comparison is an pressing issue in image quality enhancement. Most studies have approached deblurring by using iterative algorithms in an attempt to deconvolve the blurred image into the ideal image. Deconvolution is difficult due to the need to estimate a point spread function for the blur after each iteration, which can be computationally expensive for many iterations which often causes some amount of distortion or \"ringing\" in the deblurred image. However, image fusion may provide a solution. By deblurring a no-reference image, then fusing it with the blurred image, it is possible to extract additional salient information from the fused image; however the deblurring process causes some degree of information loss. The act of fixing one section of the image can cause distortion in another section of the image. Hence, by fusing the blurred and deblurred images together, it is critical to retain important information from the blurred image and reduce the \"ringing\" in the deblurred image. To evaluate the fusion process, three different evaluation metrics are used: Mutual Information (MI), Mean Square Error (MSE), and Peak Signal to Noise Ratio (PSNR). This paper details an extension of the no-reference image deblurring process and the initial results indicate that image fusion has the potential to be a useful tool in recovering information in a blurred image.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114451796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 6
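The three evaluation metrics the abstract names are standard and compact enough to sketch directly; only the MI estimator's 64-bin joint histogram is our choice rather than anything stated in the paper.

```python
import numpy as np

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return np.inf if m == 0 else 10.0 * np.log10(peak**2 / m)

def mutual_information(a, b, bins=64):
    # Histogram-based MI estimate between two images.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))
```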
Novel geometric coordination registration in cone-beam computed Tomogram
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2014-10-01 DOI: 10.1109/AIPR.2014.7041922
W. Y. Lam, Henry Y. T. Ngan, P. Wat, H. Luk, E. Pow, T. Goto
{"title":"Novel geometric coordination registration in cone-beam computed Tomogram","authors":"W. Y. Lam, Henry Y. T. Ngan, P. Wat, H. Luk, E. Pow, T. Goto","doi":"10.1109/AIPR.2014.7041922","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041922","url":null,"abstract":"The use of cone-beam computed tomography (CBCT) in medical field can help the clinicians to visualize the hard tissues in head and neck region via a cylindrical field of view (FOV). The images are usually presented with reconstructed three-dimensional (3D) imaging and its orthogonal (x-, y- and z-planes) images. Spatial relationship of the structures in these orthogonal views is important for diagnosis of diseases as well as planning for treatment. However, the non-standardized positioning of the object during the CBCT data acquisition often induces errors in measurement since orthogonal images cut at different planes might look similar. In order to solve the problem, this paper proposes an effective mapping from the Cartesian coordinates of a cube physically to its respective coordinates in 3D imaging. Therefore, the object (real physical domain) and the imaging (computerized virtual domain) can be linked up and registered. In this way, the geometric coordination of the object/imaging can be defined and its orthogonal images would be fixed on defined planes. The images can then be measured with vector information and serial imagings can also be directly compared.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117333543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
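One plausible realization of the physical-to-imaging mapping (the paper does not publish code): given the cube's corner coordinates measured physically and located in the CBCT volume, estimate the rigid transform between the two frames with the Kabsch algorithm. Treating the mapping as rigid, with scale handled by the scanner's known voxel size, is our assumption.

```python
import numpy as np

def rigid_register(phys_pts, image_pts):
    # phys_pts, image_pts: (N, 3) corresponding points, N >= 3.
    pc, ic = phys_pts.mean(axis=0), image_pts.mean(axis=0)
    H = (phys_pts - pc).T @ (image_pts - ic)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T   # rotation mapping physical frame -> image frame
    t = ic - R @ pc      # translation
    return R, t
```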
Range invariant anomaly detection for LWIR polarimetric imagery
2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) Pub Date: 2014-10-01 DOI: 10.1109/AIPR.2014.7041931
J. Romano, D. Rosario
{"title":"Range invariant anomaly detection for LWIR polarimetric imagery","authors":"J. Romano, D. Rosario","doi":"10.1109/AIPR.2014.7041931","DOIUrl":"https://doi.org/10.1109/AIPR.2014.7041931","url":null,"abstract":"In this paper we present a modified version of a previously proposed anomaly detector for polarimetric imagery. This modified version is a more adaptive, range invariant anomaly detector based on the covariance difference test, the M-Box. The paper demonstrates the underlying issue of range to target dependency of the previous algorithm and offers a solution that is very easily implemented with the M-Box covariance test. Results are shown where the new algorithm is capable of identifying manmade objects as anomalies in both close and long range scenarios.","PeriodicalId":210982,"journal":{"name":"2014 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123292188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
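A simplified sketch of the covariance-difference test underlying the detector: Box's M statistic compares a local window's band covariance against the background's, and a large value flags an anomaly. The decision threshold is an assumed parameter, and the paper's range-adaptive refinements are not captured here.

```python
import numpy as np

def box_m(samples_a, samples_b):
    # samples_*: (n_i, p) arrays of p-band pixel vectors from two regions.
    na, nb = len(samples_a), len(samples_b)
    Sa, Sb = np.cov(samples_a.T), np.cov(samples_b.T)
    Sp = ((na - 1) * Sa + (nb - 1) * Sb) / (na + nb - 2)  # pooled covariance
    return ((na + nb - 2) * np.log(np.linalg.det(Sp))
            - (na - 1) * np.log(np.linalg.det(Sa))
            - (nb - 1) * np.log(np.linalg.det(Sb)))

def is_anomaly(window_px, background_px, thresh=50.0):
    # Flag the window when its covariance differs strongly from background.
    return box_m(window_px, background_px) > thresh
```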