2019 IEEE International Conference on Imaging Systems and Techniques (IST): Latest Publications

Identifying Asthma genetic signature patterns by mining Gene Expression BIG Datasets using Image Filtering Algorithms
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date : 2019-12-01 DOI: 10.1109/IST48021.2019.9010412
M. Hachim, B. Mahboub, Q. Hamid, R. Hamoudi
{"title":"Identifying Asthma genetic signature patterns by mining Gene Expression BIG Datasets using Image Filtering Algorithms","authors":"M. Hachim, B. Mahboub, Q. Hamid, R. Hamoudi","doi":"10.1109/IST48021.2019.9010412","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010412","url":null,"abstract":"Asthma is a treatable but incurable chronic inflammatory disease affecting more than 14% of the UAE population. Asthma is still a clinical dilemma as there is no proper clinical definition of asthma, unknown definitive underlying mechanisms, no objective prognostic tool nor bedside noninvasive diagnostic test to predict complication or exacerbation. Big Data in the form of publicly available transcriptomics can be a valuable source to decipher complex diseases like asthma. Such an approach is hindered by technical variations between different studies that may mask the real biological variations and meaningful, robust findings. A large number of datasets of gene expression microarray images need a powerful tool to properly translate the image intensities into truly differential expressed genes between conditioned examined from the noise. Here we used a novel bioinformatic method based on the coefficient of variance to filter nonvariant probes with stringent image analysis processing between asthmatic and healthy to increase the power of identifying accurate signals hidden within the heterogeneous nature of asthma. Our analysis identified important signaling pathways members, namely NFKB and TGFB pathways, to be differentially expressed between severe asthma and healthy controls. Those vital pathways represent potential targets for future asthma treatment and can serve as reliable biomarkers for asthma severity. Proper image analysis for the publicly available microarray transcriptomics data increased its usefulness to decipher asthma and identify genuine differentially expressed genes that can be validated across different datasets.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114610502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A Deep Learning-Based Approach for Accurate Segmentation of Bladder Wall using MR Images
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date : 2019-12-01 DOI: 10.1109/IST48021.2019.9010233
K. Hammouda, A. El-Baz, F. Khalifa, A. Soliman, M. Ghazal, M. A. El-Ghar, A. Haddad, Mohammed M Elmogy, H. Darwish, R. Keynton
{"title":"A Deep Learning-Based Approach for Accurate Segmentation of Bladder Wall using MR Images","authors":"K. Hammouda, A. El-Baz, F. Khalifa, A. Soliman, M. Ghazal, M. A. El-Ghar, A. Haddad, Mohammed M Elmogy, H. Darwish, R. Keynton","doi":"10.1109/IST48021.2019.9010233","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010233","url":null,"abstract":"In this paper, a deep learning-based convolution neural network (CNN) is developed for accurate segmentation of the bladder wall using T2-weighted magnetic resonance imaging (T2W-MRI). Our framework utilizes a dual pathway, two-dimensional CNN for pathological bladder segmentation. Due to large bladder shape variability across subjects and the existence of pathology, a learnable adaptive shape prior (ASP) model is incorporated into our framework. To obtain the goal regions, the neural network fuses the MR image data for the first pathway, and the estimated ASP model for the second pathway. To remove noisy and scattered predictions, the CNN soft output is refined using a fully connected conditional random field (CRF). Our pipeline has been tested and evaluated using a leave-one-subject-out approach (LOSO) on twenty MRI data sets. Our framework achieved accurate segmentation results for the bladder wall and tumor as documented by the Dice similarity coefficient (DSC) and Hausdorff distance (HD). Moreover, comparative results against other segmentation approaches documented the superiority of our framework to provide accurate results for pathological bladder wall segmentation.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"160 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116236854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Cross-Spectral Periocular Recognition by Cascaded Spectral Image Transformation
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date : 2019-12-01 DOI: 10.1109/IST48021.2019.9010520
K. Raja, N. Damer, Raghavendra Ramachandra, F. Boutros, C. Busch
{"title":"Cross-Spectral Periocular Recognition by Cascaded Spectral Image Transformation","authors":"K. Raja, N. Damer, Raghavendra Ramachandra, F. Boutros, C. Busch","doi":"10.1109/IST48021.2019.9010520","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010520","url":null,"abstract":"Recent efforts in biometrics have focused on cross-domain face recognition where images from one domain are either transformed or synthesized. In this work, we focus on a similar problem for cross spectral periocular recognition where the images from Near Infra Red (NIR) domain are matched against Visible (VIS) spectrum images. Specifically, we propose to adapt a cascaded image transformation network that can produce NIR image given a VIS image. The proposed approach is first validated with regards to the quality of the image produced by employing various quality factors. Second the applicability is demonstrated with images generated by the proposed approach. We employ a publicly available cross-spectral periocular image data of 240 unique periocular instances captured in 8 different capture sessions. We experimentally validate that the proposed image transformation scheme can produce NIR like images and also can be used with any existing feature extraction scheme. To this extent, we demonstrate the biometric applicability by using both hand-crafted and deep neural network based features under verification setting. The obtained EER of 0.7% indicates the suitability of proposed approach for image transformation from the VIS to the NIR domain.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123782133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
A Deconvolutional Bottom-up Deep Network for multi-person pose estimation
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date : 2019-12-01 DOI: 10.1109/IST48021.2019.9010189
Meng Li, Haoqian Wang, Yongbing Zhang, Yi Yang
{"title":"A Deconvolutional Bottom-up Deep Network for multi-person pose estimation","authors":"Meng Li, Haoqian Wang, Yongbing Zhang, Yi Yang","doi":"10.1109/IST48021.2019.9010189","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010189","url":null,"abstract":"Due to the trade off between model complexity and estimation accuracy, current human pose estimators either are of low accuracy or requires long running time. Such dilemma is especially severe in real time multi-person pose estimation. To address this issue, we design a deep network of reduced parameter size and high estimation accuracy, via introducing deconvolution layers instead of widely used fully-connected configuration. Specifically, our model consists of two successive parts: Detection network and matching network. The former outputs keypoint heatmap and person locations, and then the latter produces the final pose estimation using multiple deconvolutional layers. Benefiting from the simple structure and explicit utilization of previously neglected spatial structure in heatmap, the matching network is of specially high efficiency and at high precision. Experiments on the challenging COCO dataset demonstrate our method can almost cut off the running parameters of matching network, while achieving higher accuracy than existing methods.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"206 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125738371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic Calibration of Dual-LiDARs Using Two Poles Stickered with Retro-Reflective Tape
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date : 2019-11-02 DOI: 10.1109/IST48021.2019.9010134
Bohuan Xue, Jianhao Jiao, Yilong Zhu, Linwei Zheng, Dong Han, Ming Liu, Rui Fan
{"title":"Automatic Calibration of Dual-LiDARs Using Two Poles Stickered with Retro-Reflective Tape","authors":"Bohuan Xue, Jianhao Jiao, Yilong Zhu, Linwei Zheng, Dong Han, Ming Liu, Rui Fan","doi":"10.1109/IST48021.2019.9010134","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010134","url":null,"abstract":"Multi-LiDAR systems have been prevalently applied in modern autonomous vehicles to render a broad view of the environments. The rapid development of 5G wireless technologies has brought a breakthrough for current cellular vehicle-to-everything (C-V2X) applications. Therefore, a novel localization and perception system in which multiple LiDARs are mounted around cities for autonomous vehicles has been proposed. However, the existing calibration methods require specific hard-to-move markers, ego-motion, or good initial values given by users. In this paper, we present a novel approach that enables automatic multi-LiDAR calibration using two poles stickered with retro-reflective tape. This method does not depend on prior environmental information, initial values of the extrinsic parameters, or movable platforms like a car. We analyze the LiDAR-pole model, verify the feasibility of the algorithm through simulation data, and present a simple method to measure the calibration errors w.r.t the ground truth. Experimental results demonstrate that our approach gains better flexibility and higher accuracy when compared with the state-of-the-art approach.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122030017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Sliding Mode Control based Support Vector Machine RBF Kernel Parameter Optimization
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date : 2019-11-01 DOI: 10.1109/IST48021.2019.9010479
Maryam Yalsavar, P. Karimaghaee, Akbar Sheikh-Akbari, J. Dehmeshki, M. Khooban, Salah Al-Majeed
{"title":"Sliding Mode Control based Support Vector Machine RBF Kernel Parameter Optimization","authors":"Maryam Yalsavar, P. Karimaghaee, Akbar Sheikh-Akbari, J. Dehmeshki, M. Khooban, Salah Al-Majeed","doi":"10.1109/IST48021.2019.9010479","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010479","url":null,"abstract":"Support Vector Machine (SVM) is a learning-based algorithm, which is widely used for classification in many applications. Despite its advantages, its application to large scale datasets is limited due to its use of large number of support vectors and dependency of its performance on its kernel parameter. This paper presents a Sliding Mode Control based Support Vector Machine Radial Basis Function's kernel parameter optimization (SMC-SVM-RBF) method, inspired by sliding mode closed loop control theory, which has demonstrated significantly higher performance to that of the standard closed loop control technique. The proposed method first defines an error equation and a sliding surface and then iteratively updates the RBF's kernel parameter based on the sliding mode control theory, forcing SVM training error to converge below a predefined threshold value. The closed loop nature of the proposed algorithm increases the robustness of the technique to uncertainty and improves its convergence speed. Experimental results were generated using nine standard benchmark datasets covering wide range of applications. Results show the proposed SMC-SVM-RBF method is significantly faster than those of classical SVM based techniques. Moreover, it generates more accurate results than most of the state of the art SVM based methods.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130139401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
A Robust Pavement Mapping System Based on Normal-Constrained Stereo Visual Odometry
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date : 2019-10-29 DOI: 10.1109/IST48021.2019.9010439
Huaiyang Huang, Rui Fan, Yilong Zhu, Ming Liu, I. Pitas
{"title":"A Robust Pavement Mapping System Based on Normal-Constrained Stereo Visual Odometry","authors":"Huaiyang Huang, Rui Fan, Yilong Zhu, Ming Liu, I. Pitas","doi":"10.1109/IST48021.2019.9010439","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010439","url":null,"abstract":"Pavement condition is crucial for civil infrastructure maintenance. This task usually requires efficient road damage localization, which can be accomplished by the visual odometry system embedded in unmanned aerial vehicles (UAVs), However, the state-of-the-art visual odometry and mapping methods suffer from large drift under the degeneration of the scene structure. To alleviate this issue, we integrate normal constraints into the visual odometry process, which greatly helps to avoid large drift. By parameterizing the normal vector on the tangential plane, the normal factors are coupled with traditional reprojection factors in the pose optimization procedure. The experimental results demonstrate the effectiveness of the proposed system. The overall absolute trajectory error is improved by approximately 20%, which indicates that the estimated trajectory is much more accurate than that obtained using other state-of-the-art methods.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"471 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122195937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Autonomous UAV Landing System Based on Visual Navigation
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date : 2019-10-29 DOI: 10.1109/IST48021.2019.9010264
Zhixin Wu, Peng Han, Ruiwen Yao, Lei Qiao, Weidong Zhang, T. Shen, Min Sun, Yilong Zhu, Ming Liu, Rui Fan
{"title":"Autonomous UAV Landing System Based on Visual Navigation","authors":"Zhixin Wu, Peng Han, Ruiwen Yao, Lei Qiao, Weidong Zhang, T. Shen, Min Sun, Yilong Zhu, Ming Liu, Rui Fan","doi":"10.1109/IST48021.2019.9010264","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010264","url":null,"abstract":"In this paper, we present an autonomous unmanned aerial vehicle (UAV) landing system based on visual navigation. We design the landmark as a topological pattern in order to enable the UAV to distinguish the landmark from the environment easily. In addition, a dynamic thresholding method is developed for image binarization to improve detection efficiency. The relative distance in the horizontal plane is calculated according to effective image information, and the relative height is obtained using a linear interpolation method. The landing experiments are performed on a static and a moving platform, respectively. The experimental results illustrate that our proposed landing system performs robustly and accurately.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126263332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
PT-ResNet: Perspective Transformation-Based Residual Network for Semantic Road Image Segmentation
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date : 2019-10-29 DOI: 10.1109/IST48021.2019.9010501
Rui Fan, Yuan Wang, Lei Qiao, Ruiwen Yao, Peng Han, Weidong Zhang, I. Pitas, Ming Liu
{"title":"PT-ResNet: Perspective Transformation-Based Residual Network for Semantic Road Image Segmentation","authors":"Rui Fan, Yuan Wang, Lei Qiao, Ruiwen Yao, Peng Han, Weidong Zhang, I. Pitas, Ming Liu","doi":"10.1109/IST48021.2019.9010501","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010501","url":null,"abstract":"Semantic road region segmentation is a high-level task, which paves the way towards road scene understanding. This paper presents a residual network trained for semantic road segmentation. Firstly, we represent the projections of road disparities in the v-disparity map as a linear model, which can be estimated by optimizing the v-disparity map using dynamic programming. This linear model is then utilized to reduce the redundant information in the left and right road images. The right image is also transformed into the left perspective view, which greatly enhances the road surface similarity between the two images. Finally, the processed stereo images and their disparity maps are concatenated to create a set of 3D images, which are then utilized to train our neural network. The experimental results illustrate that our network achieves a maximum F1-measure of approximately 91.19%, when analyzing the images from the KITTI road dataset.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129879997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Real-Time, Environmentally-Robust 3D LiDAR Localization
2019 IEEE International Conference on Imaging Systems and Techniques (IST) Pub Date : 2019-10-28 DOI: 10.1109/IST48021.2019.9010305
Yilong Zhu, Bohuan Xue, Linwei Zheng, Huaiyang Huang, Ming Liu, Rui Fan
{"title":"Real-Time, Environmentally-Robust 3D LiDAR Localization","authors":"Yilong Zhu, Bohuan Xue, Linwei Zheng, Huaiyang Huang, Ming Liu, Rui Fan","doi":"10.1109/IST48021.2019.9010305","DOIUrl":"https://doi.org/10.1109/IST48021.2019.9010305","url":null,"abstract":"Localization, or position fixing, is an important problem in robotics research. In this paper, we propose a novel approach for long-term localization in a changing environment using 3D LiDAR. We first create the map of a real environment using GPS and LiDAR. Then, we divide the map into several small parts as the targets for cloud registration, which can not only improve the robustness but also reduce the registration time. We proposed a localization method called PointLocalization. PointLocalization allows us to fuse different kinds of odometers, which can optimize the accuracy and frequency of localization results. We evaluate our algorithm on an unmanned ground vehicle (UGV) using LiDAR and a wheel encoder, and obtain the localization results at more than 20 Hz after fusion. The algorithm can also localize the UGV in a 180-degree field of view (FOV). Using an outdated map captured six months ago, this algorithm shows great robustness, and the test results show that it can achieve an accuracy of 10 cm. PointLocalization has been tested for a period of more than six months in a crowded factory and has operated successfully over a distance of more than 2000 km.","PeriodicalId":117219,"journal":{"name":"2019 IEEE International Conference on Imaging Systems and Techniques (IST)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121610400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9