Latest Articles in Medical Imaging: Image-Guided Procedures

Automatic brain structure-guided registration of pre and intra-operative 3D ultrasound for neurosurgery
Medical Imaging: Image-Guided Procedures | Pub Date: 2020-03-16 | DOI: 10.1117/12.2549630
S. Ghose, David M. Mills, J. Mitra, L. Smith, D. Yeo, A. Golby, Sarah F. Frisken, Thomas K. Foo
Abstract: Image guidance aids neurosurgeons in making critical clinical decisions for safe maximal resection of diseased tissue. The brain, however, undergoes significant non-linear structural deformation due to dura opening and tumor resection. Deformable registration of pre-operative to intra-operative ultrasound can be used to map pre-operative planning MRI onto intra-operative ultrasound, which may aid in determining tumor resection margins during surgery. In this work, brain structures visible in pre- and intra-operative 3D ultrasound were used for automatic deformable registration. A Gaussian mixture model was used to automatically segment structures of interest in pre- and intra-operative ultrasound, and patch-based normalized cross-correlation was used to establish correspondences between the segmented structures. An affine registration based on these correspondences was followed by a B-spline deformable registration to register the pre- and intra-operative ultrasound. Manually labelled landmarks in pre- and intra-operative ultrasound were used to quantify the mean target registration error. We achieve a mean target registration error of 1.43±0.8 mm when validated on 17 pre- and intra-operative ultrasound volume pairs from a public dataset.
Citations: 1
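The patch-based normalized cross-correlation (NCC) step in the abstract above can be sketched as follows. This is a generic illustration, not the authors' implementation; the exhaustive 2D search and the patch handling are assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(patch, image):
    """Exhaustively search `image` for the location maximizing NCC with `patch`."""
    ph, pw = patch.shape
    best_score, best_pos = -1.0, (0, 0)
    for i in range(image.shape[0] - ph + 1):
        for j in range(image.shape[1] - pw + 1):
            score = ncc(patch, image[i:i + ph, j:j + pw])
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos, best_score
```

In the paper's setting the search would presumably be restricted to a neighborhood of each segmented structure in the other volume; the matched patch centers then serve as point correspondences driving the affine and B-spline registrations.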
A standardized method for accuracy study of MRI-compatible robots: case study: a body-mounted robot
Medical Imaging: Image-Guided Procedures | Pub Date: 2020-03-16 | DOI: 10.1117/12.2550575
E. Siampli, R. Monfaredi, S. Pieper, Pan Li, Viktoriya Beskin, K. Cleary
Citations: 0
Automatic segmentation of spinal ultrasound landmarks with U-net using multiple consecutive images for input
Medical Imaging: Image-Guided Procedures | Pub Date: 2020-03-16 | DOI: 10.1117/12.2549584
V. Wu, T. Ungi, K. Sunderland, Grace Pigeau, Abigael Schonewille, G. Fichtinger
Abstract: PURPOSE: Scoliosis screening is currently implemented in only a few countries due to the lack of a safe and accurate measurement method. Spinal ultrasound is a viable alternative to X-ray, but manual annotation of images is difficult and time-consuming. We propose a U-net neural network that takes several consecutive images as input, as an enhancement over using single images as input. METHODS: Ultrasound data were collected from nine healthy volunteers and images were manually segmented. To accommodate consecutive input images, each ultrasound image was exported stacked with its preceding images to serve as input for a modified U-net. The resulting output segmentations were evaluated by the percentage of true negative and true positive pixel predictions. RESULTS: Comparing one- to five-image input arrays, the three-image input performed best in terms of true positive rate. The single-image and three-image inputs were then tested further. The single-image network achieved a true negative rate of 99.79% and a true positive rate of 63.56%; the three-image network achieved a true negative rate of 99.75% and a true positive rate of 66.64%. CONCLUSION: The three-image input network outperformed the single-input network's true positive rate by 3.08%. These findings suggest that providing the two images immediately preceding the original image as additional input assists the neural network in making more accurate predictions.
Citations: 4
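The multi-image input described above amounts to stacking each frame with its predecessors along a channel axis. A minimal sketch; the array layout and the edge-padding choice (repeating the earliest frame) are assumptions, not the authors' exact pipeline:

```python
import numpy as np

def stack_consecutive(frames, n=3):
    """Turn a (T, H, W) frame sequence into (T, H, W, n) samples, where each
    sample holds the current frame and its n-1 predecessors as channels.
    Early frames are padded by repeating the first available frame."""
    T, H, W = frames.shape
    out = np.empty((T, H, W, n), dtype=frames.dtype)
    for t in range(T):
        for k in range(n):
            out[t, :, :, k] = frames[max(t - (n - 1 - k), 0)]
    return out
```

Each output sample can then be fed to a U-net whose first convolution accepts n input channels instead of one.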
Optical imaging of dental plaque pH
Medical Imaging: Image-Guided Procedures | Pub Date: 2020-03-16 | DOI: 10.1117/12.2551322
Chuqin Huang, Manuja Sharma, Lauren K. Lee, Matthew D. Carson, M. Fauver, E. Seibel
Abstract: Tooth decay is one of the most common chronic infectious diseases worldwide. Bacteria in the oral biofilm create a local acidic environment that demineralizes the enamel in the caries disease process. By optically imaging plaque pH in the pits, fissures, and contacting surfaces of teeth, medicinal therapies can be accurately applied to prevent caries or monitor its reversal. To achieve this goal, fluorescence emission from an aqueous solution of sodium fluorescein was measured using a multimodal scanning fiber endoscope (mmSFE). The 1.6-mm-diameter mmSFE scans 424 nm laser light and collects wide-field grayscale reflectance at 30 Hz for navigation. Two fluorescence channels centered at 520 and 549 nm are acquired, and ratiometric analysis produces a pseudo-color pH overlay. In vitro measurements calibrated the pH heat maps over the range 4.7 to 7.2 pH (0.2 standard deviation). In vivo measurements from a single case study provide informative images of interproximal biofilm before and after a sugar rinse. Post-processing a time series of images yields the average pH change of the oral biofilm, replicating the Stephan curve. These spatio-temporal records of oral biofilm pH can provide a new method of assessing the risk of tooth decay, guide the application of preventative therapies, and quantitatively monitor overall oral health. Because it relies on low-power laser light and a US FDA-approved dye, this non-contact in vivo optical imaging of pH may be extended to measurements of wound healing, tumor environments, and food-processing surfaces.
Citations: 1
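The ratiometric analysis above reduces, per pixel, to taking the ratio of the two fluorescence channels and mapping it to pH through an in vitro calibration curve. A minimal sketch; the calibration table values below are hypothetical placeholders, not the paper's measured data:

```python
import numpy as np

# Hypothetical calibration table (channel ratio -> pH). Real values would come
# from the in vitro sodium fluorescein measurements described in the abstract.
CAL_RATIO = np.array([0.60, 0.80, 1.00, 1.20, 1.40])
CAL_PH = np.array([4.7, 5.4, 6.0, 6.6, 7.2])

def ph_map(i520, i549, eps=1e-6):
    """Per-pixel pH estimate from the 520 nm and 549 nm fluorescence channels.

    The ratio cancels out dye concentration and illumination variations;
    `eps` guards against division by zero in dark pixels.
    """
    ratio = i549 / (i520 + eps)
    return np.interp(ratio, CAL_RATIO, CAL_PH)
```

Applied to every pixel of the two channel images, this yields the pseudo-color pH heat map overlaid on the grayscale reflectance view.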
Brain deformation compensation for deep brain lead placement surgery: a comparison of simulations driven by surface vs deep brain sparse data
Medical Imaging: Image-Guided Procedures | Pub Date: 2020-03-16 | DOI: 10.1117/12.2550048
Chen Li, X. Fan, J. Aronson, K. Paulsen
Abstract: Accurate surgical placement of electrodes is essential to successful deep brain stimulation (DBS) for patients with neurodegenerative diseases such as Parkinson's disease. However, the accuracy of pre-operative images used for surgical planning and guidance is often degraded by brain shift during surgery. To predict such intra-operative target deviation due to brain shift, we have developed a finite-element biomechanical model that assimilates intraoperative sparse data to compute a whole-brain displacement field that updates the preoperative images. Previously, modeling incorporating surface sparse data achieved promising results at deep brain structures. However, access to surface data may be limited during a burr-hole-based procedure, where the exposed cortex is too small to acquire adequate intraoperative imaging data. In this paper, our biomechanical brain model was driven by deep brain sparse data extracted from the lateral ventricles using a Demons algorithm, and the simulation result was compared against that of modeling with surface data. Two patient cases were explored, using preoperative CT (preCT) and postoperative CT (postCT) for the simulation. In patient case one, with large symmetrical brain shift, the model driven by deep brain sparse data reduced the target registration error (TRE) of preCT from 3.53 to 1.36 mm at AC and from 1.79 to 1.17 mm at PC, whereas modeling with surface data produced even lower TREs of 0.58 and 0.69 mm, respectively. However, in patient case two, with large asymmetrical brain shift, modeling with deep brain sparse data reduced the TRE from 1.73 to 0.68 mm, the lowest in that case. These results suggest that both surface and deep brain sparse data are capable of reducing the TRE of preoperative images at deep brain landmarks. The success of modeling with the assimilation of deep brain sparse data alone shows the potential of implementing such a method in the OR, because sparse data at the lateral ventricles can be acquired using ultrasound imaging.
Citations: 3
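The target registration error (TRE) reported above is simply the Euclidean distance between each registered landmark (here AC and PC) and its ground-truth position. A minimal sketch of the metric itself:

```python
import numpy as np

def tre(registered_pts, reference_pts):
    """Per-landmark Euclidean distance (e.g., in mm) between registered
    landmark positions and their ground-truth counterparts.

    Both inputs are (N, 3) arrays of corresponding 3D points."""
    registered_pts = np.asarray(registered_pts, dtype=float)
    reference_pts = np.asarray(reference_pts, dtype=float)
    return np.linalg.norm(registered_pts - reference_pts, axis=1)
```

In the study's workflow, the reference positions would come from the postoperative CT and the registered positions from the model-updated preoperative CT.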
Multi-body registration for fracture reduction in orthopaedic trauma surgery
Medical Imaging: Image-Guided Procedures | Pub Date: 2020-03-16 | DOI: 10.1117/12.2549708
R. Han, A. Uneri, P. Wu, R. Vijayan, P. Vagdargi, M. Ketcha, N. Sheth, S. Vogt, G. Kleinszig, G. Osgood, J. Siewerdsen
Abstract: Purpose. Fracture reduction is a challenging part of orthopaedic pelvic trauma procedures, resulting in poor long-term prognosis if the reduction does not accurately restore natural morphology. Manual preoperative planning is performed to obtain target transformations of the target bones, a process that is challenging and time-consuming even for experts within the rapid workflow of emergent care and fluoroscopically guided surgery. We report a method for fracture reduction planning using a novel image-based registration framework. Method. An objective function is designed to simultaneously register multi-body bone fragments, preoperatively segmented via a graph-cut method, to a pelvic statistical shape model (SSM) under inter-body collision constraints. An alternating optimization strategy switches between fragment alignment and SSM adaptation to solve for the fragment transformations for fracture reduction planning. The method was examined in a leave-one-out study over a 40-member pelvic atlas, with two-body and three-body fractures simulated in the left innominate bone with displacements of 0–20 mm and 0°–15°. Result. Experiments showed the feasibility of the registration method in both two-body and three-body fracture cases. The segmentations achieved a median Dice coefficient of 0.94 (0.01 interquartile range [IQR]) and root-mean-square error of 2.93 mm (0.56 mm IQR). In two-body fracture cases, fracture reduction planning yielded 3.8 mm (1.6 mm IQR) translational and 2.9° (1.8° IQR) rotational error. Conclusion. The method demonstrated accurate fracture reduction planning within 5 mm and shows promise for future generalization to more complicated fracture cases. The algorithm provides a novel means of planning from preoperative CT images that are already acquired in the standard workflow.
Citations: 3
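The Dice coefficient used above to score the graph-cut segmentations measures overlap between two binary masks (1 = identical, 0 = disjoint). A standard implementation:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom
```

In this study the masks would be the automatic and reference voxel labelings of each bone fragment in the preoperative CT.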
Automatic needle localization in intraoperative 3D transvaginal ultrasound images for high-dose-rate interstitial gynecologic brachytherapy
Medical Imaging: Image-Guided Procedures | Pub Date: 2020-03-16 | DOI: 10.1117/12.2549664
J. R. Rodgers, D. Gillies, W. Hrinivich, I. Gyacskov, A. Fenster
Abstract: High-dose-rate interstitial gynecologic brachytherapy requires multiple needles to be inserted into the tumor and surrounding area while avoiding nearby healthy organs-at-risk (OARs), including the bladder and rectum. We propose a 360° three-dimensional (3D) transvaginal ultrasound (TVUS) guidance system for visualization of needles and report the implementation of two automatic needle segmentation algorithms to aid intraoperative needle localization. Two-dimensional (2D) needle segmentation, which allows immediate adjustment of needle trajectories to mitigate needle deflection and avoid OARs, was implemented in near real time using a convolutional neural network with a U-Net architecture, trained on a dataset of 2D ultrasound images from multiple applications containing needle-like structures. In 18 unseen TVUS images, the median position difference [95% confidence interval] between manually and algorithmically segmented needles was 0.27 [0.20, 0.68] mm and the mean angular difference was 0.50 [0.27, 1.16]°. Automatic needle segmentation in 3D TVUS images was performed using an algorithm leveraging the randomized 3D Hough transform. All needles were accurately localized in a proof-of-concept image, with a median position difference of 0.79 [0.62, 0.93] mm and median angular difference of 0.46 [0.31, 0.62]° compared to manual segmentations. Further investigation into the robustness of the algorithm to complex cases containing large shadowing, air, or reverberation artefacts is ongoing. Intraoperative automatic needle segmentation in interstitial gynecologic brachytherapy has the potential to improve implant quality and could allow 3D ultrasound to be used for treatment planning, eliminating the requirement for post-insertion CT scans.
Citations: 4
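The 3D localization step above ultimately recovers a line (a position and a unit direction) from candidate needle voxels. As a simplified stand-in for the randomized 3D Hough transform named in the abstract, a least-squares line fit via principal component analysis illustrates the idea:

```python
import numpy as np

def fit_needle_axis(voxels):
    """Fit a 3D line (centroid + unit direction) to candidate needle voxels.

    Least-squares line fit via PCA/SVD: the first principal axis of the
    point cloud is the direction minimizing summed squared distances.
    This is a simplified stand-in for a randomized 3D Hough transform,
    which additionally tolerates outlier voxels."""
    pts = np.asarray(voxels, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    return centroid, direction / np.linalg.norm(direction)
```

Position and angular differences against a manual segmentation can then be computed from the fitted centroid and direction of each needle.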
Assessment of proton beam ablation in myocardial infarct tissue using delayed contrast-enhanced magnetic resonance imaging
Medical Imaging: Image-Guided Procedures | Pub Date: 2020-03-16 | DOI: 10.1117/12.2550659
M. Rettmann, S. Hohmann, A. Deisher, H. Konishi, J. Kruse, L. K. Newman, K. D. Parker, M. Herman, D. Packer
Publisher's Note: This paper, originally published on 16 March 2020, was replaced with a corrected/revised version on 28 April 2020. If you downloaded the original PDF but are unable to access the revision, please contact SPIE Digital Library Customer Service for assistance.
Citations: 0
Towards democratizing AI in MR-based prostate cancer diagnosis: 3.0 to 1.5 Tesla
Medical Imaging: Image-Guided Procedures | Pub Date: 2020-03-16 | DOI: 10.1117/12.2549413
Andrew Grebenisan, A. Sedghi, A. Menard, J. Izard, R. Siemens, P. Mousavi
Citations: 1
Automatic segmentation of brain tumor in intraoperative ultrasound images using 3D U-Net
Medical Imaging: Image-Guided Procedures | Pub Date: 2020-03-16 | DOI: 10.1117/12.2549516
F. Carton, M. Chabanas, B. K. R. Munkvold, I. Reinertsen, J. Noble
Abstract: Because the brain deforms during neurosurgery, intraoperative imaging can be used to visualize the actual location of brain structures. These images are used for image-guided navigation, for determining whether the resection is complete, and for localizing remaining tumor tissue. Intraoperative ultrasound (iUS) is a convenient modality with short acquisition times. However, iUS images are difficult to interpret because of noise and artifacts; in particular, tumor tissue is hard to distinguish from healthy tissue, making tumors very difficult to delimit in iUS images. In this paper, we propose an automatic method to segment low-grade brain tumors in iUS images using 2D and 3D U-Nets. We trained the networks on three folds, each with twelve training cases and five test cases. The obtained results are promising, with a median Dice score of 0.72, and the volume differences between the estimated and ground-truth segmentations were similar to the intra-rater volume differences. While these results are preliminary, they suggest that deep learning methods can be successfully applied to tumor segmentation in intraoperative images.
Citations: 2
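The volume comparison mentioned above reduces to counting foreground voxels and scaling by the physical voxel size. A minimal sketch; the voxel spacing used here is a placeholder, not the study's acquisition geometry:

```python
import numpy as np

def segmentation_volume_ml(mask, voxel_mm=(0.5, 0.5, 0.5)):
    """Volume of a binary segmentation in milliliters, given voxel spacing
    in mm along each axis (1 mL = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(voxel_mm))
    return np.count_nonzero(mask) * voxel_mm3 / 1000.0

def volume_difference_ml(est_mask, gt_mask, voxel_mm=(0.5, 0.5, 0.5)):
    """Absolute volume difference between estimated and ground-truth masks."""
    return abs(segmentation_volume_ml(est_mask, voxel_mm)
               - segmentation_volume_ml(gt_mask, voxel_mm))
```

Comparing this difference against the spread between repeated manual segmentations by the same rater is one way to judge whether an automatic method is within intra-rater variability.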