Latest Articles in IEEE Transactions on Medical Imaging

Towards Semantically-Consistent Deformable 2D-3D Registration for 3D Craniofacial Structure Estimation from A Single-View Lateral Cephalometric Radiograph
IF 10.6, CAS Q1 (Medicine)
IEEE Transactions on Medical Imaging Pub Date: 2024-09-09 DOI: 10.1109/tmi.2024.3456251
Yikun Jiang, Yuru Pei, Tianmin Xu, Xiaoru Yuan, Hongbin Zha
Citations: 0
Full-wave Image Reconstruction in Transcranial Photoacoustic Computed Tomography using a Finite Element Method
IF 10.6, CAS Q1 (Medicine)
IEEE Transactions on Medical Imaging Pub Date: 2024-09-09 DOI: 10.1109/tmi.2024.3456595
Yilin Luo, Hsuan-Kai Huang, Karteekeya Sastry, Peng Hu, Xin Tong, Joseph Kuo, Yousuf Aborahama, Shuai Na, Umberto Villa, Mark A. Anastasio, Lihong V. Wang
Citations: 0
Attention-Guided Learning with Feature Reconstruction for Skin Lesion Diagnosis using Clinical and Ultrasound Images
IF 10.6, CAS Q1 (Medicine)
IEEE Transactions on Medical Imaging Pub Date: 2024-08-29 DOI: 10.1109/tmi.2024.3450682
Chunlun Xiao, Anqi Zhu, Chunmei Xia, Zifeng Qiu, Yuanlin Liu, Cheng Zhao, Weiwei Ren, Lifan Wang, Lei Dong, Tianfu Wang, Lehang Guo, Baiying Lei
Citations: 0
IMJENSE: Scan-specific Implicit Representation for Joint Coil Sensitivity and Image Estimation in Parallel MRI
IF 10.6, CAS Q1 (Medicine)
IEEE Transactions on Medical Imaging Pub Date: 2023-11-21 DOI: 10.48550/arXiv.2311.12892
Rui-jun Feng, Qing Wu, Jie Feng, Huajun She, Chunlei Liu, Yuyao Zhang, Hongjiang Wei
Parallel imaging is a commonly used technique to accelerate magnetic resonance imaging (MRI) data acquisition. Mathematically, parallel MRI reconstruction can be formulated as an inverse problem relating the sparsely sampled k-space measurements to the desired MRI image. Despite the success of many existing reconstruction algorithms, it remains a challenge to reliably reconstruct a high-quality image from highly reduced k-space measurements. Recently, implicit neural representation has emerged as a powerful paradigm to exploit the internal information and the physics of partially acquired data to generate the desired object. In this study, we introduce IMJENSE, a scan-specific implicit neural representation-based method for improving parallel MRI reconstruction. Specifically, the underlying MRI image and coil sensitivities are modeled as continuous functions of spatial coordinates, parameterized by neural networks and polynomials, respectively. The weights of the networks and the coefficients of the polynomials are learned simultaneously and directly from sparsely acquired k-space measurements, without fully sampled ground-truth data for training. Benefiting from the powerful continuous representation and the joint estimation of the MRI image and coil sensitivities, IMJENSE outperforms conventional image- and k-space-domain reconstruction algorithms. With extremely limited calibration data, IMJENSE is more stable than supervised calibrationless and calibration-based deep-learning methods. Results show that IMJENSE robustly reconstructs images acquired at 5× and 6× accelerations with only 4 or 8 calibration lines in 2D Cartesian acquisitions, corresponding to 22.0% and 19.5% undersampling rates. The high-quality results and scan specificity suggest that the proposed method has the potential to further accelerate parallel MRI data acquisition.
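The core recipe described above, fitting a coordinate network for the image and low-order polynomials for the coil sensitivities directly to the undersampled k-space of a single scan, can be sketched as follows. This is a minimal, hypothetical illustration rather than the authors' implementation: the image size, coil count, polynomial order, network width, random sampling mask, and all-zero stand-in measurements are assumptions chosen only to make the snippet self-contained.

```python
import torch
import torch.nn as nn

H = W = 64                            # toy image size (assumption)
n_coils, poly_deg = 4, 2              # assumed coil count and polynomial order

# coordinate grid over [-1, 1]^2
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                        torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)             # (H*W, 2)

class ImageINR(nn.Module):
    """MLP mapping a spatial coordinate (x, y) to a complex image value."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))
    def forward(self, xy):
        out = self.net(xy)
        return torch.complex(out[..., 0], out[..., 1])

def poly_basis(xy, deg):
    """Monomial basis 1, x, y, x^2, xy, y^2, ... evaluated at each coordinate."""
    terms = [torch.ones_like(xy[..., 0])]
    for d in range(1, deg + 1):
        for i in range(d + 1):
            terms.append(xy[..., 0] ** (d - i) * xy[..., 1] ** i)
    return torch.stack(terms, dim=-1)                              # (N, n_terms)

basis = poly_basis(coords, poly_deg).to(torch.cfloat)
image_net = ImageINR()
coil_coeffs = nn.Parameter(0.1 * torch.randn(n_coils, basis.shape[-1], dtype=torch.cfloat))

# stand-ins for real data: a random sampling mask and all-zero measurements
mask = torch.rand(H, W) < 0.25
k_meas = torch.zeros(n_coils, H, W, dtype=torch.cfloat)

opt = torch.optim.Adam(list(image_net.parameters()) + [coil_coeffs], lr=1e-3)
for step in range(200):
    img = image_net(coords).reshape(H, W)                          # implicit image
    sens = (basis @ coil_coeffs.T).T.reshape(n_coils, H, W)        # polynomial coil maps
    k_pred = torch.fft.fftshift(torch.fft.fft2(sens * img), dim=(-2, -1))
    loss = (torch.abs(k_pred - k_meas)[:, mask] ** 2).mean()       # data consistency on sampled points only
    opt.zero_grad(); loss.backward(); opt.step()
```

In a real acquisition, mask and k_meas would come from the scanner, and evaluating the fitted image_net on the full grid would give the reconstruction; no external training data is involved, which is what makes the approach scan-specific.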
Citations: 0
A Learnable Counter-condition Analysis Framework for Functional Connectivity-based Neurological Disorder Diagnosis
IF 10.6, CAS Q1 (Medicine)
IEEE Transactions on Medical Imaging Pub Date: 2023-10-06 DOI: 10.48550/arXiv.2310.03964
Eunsong Kang, Da-Woon Heo, Jiwon Lee, Heung-Il Suk
To understand the biological characteristics of neurological disorders with functional connectivity (FC), recent studies have widely utilized deep learning-based models to identify disease and conducted post-hoc analyses via explainable models to discover disease-related biomarkers. Most existing frameworks consist of three stages, namely, feature selection, feature extraction for classification, and analysis, where each stage is implemented separately. However, if the results at each stage lack reliability, this can cause misdiagnosis and incorrect analysis in subsequent stages. In this study, we propose a novel unified framework that systematically integrates diagnosis (i.e., feature selection and feature extraction) and explanation. Notably, we devised an adaptive attention network as a feature selection approach to identify individual-specific disease-related connections. We also propose a functional network relational encoder that summarizes the global topological properties of FC by learning the inter-network relations without pre-defined edges between functional networks. Finally, our framework provides a novel explanatory power for neuroscientific interpretation, termed counter-condition analysis: we simulate FC that reverses the diagnostic information (i.e., counter-condition FC), converting a normal brain to an abnormal one and vice versa. We validated the effectiveness of our framework using two large resting-state functional magnetic resonance imaging (fMRI) datasets, Autism Brain Imaging Data Exchange (ABIDE) and REST-meta-MDD, and demonstrated that our framework outperforms other competing methods for disease identification. Furthermore, we analyzed the disease-related neurological patterns based on counter-condition analysis.
Citations: 0
Masked conditional variational autoencoders for chromosome straightening
IF 10.6, CAS Q1 (Medicine)
IEEE Transactions on Medical Imaging Pub Date: 2023-06-25 DOI: 10.48550/arXiv.2306.14129
Jingxiong Li, S. Zheng, Zhongyi Shui, Shichuan Zhang, Linyi Yang, Yuxuan Sun, Yunlong Zhang, Honglin Li, Y. Ye, P. V. Ooijen, Kang Li, Lin Yang
Karyotyping is important for detecting chromosomal aberrations in human disease. However, chromosomes easily appear curved in microscopic images, which prevents cytogeneticists from analyzing chromosome types. To address this issue, we propose a framework for chromosome straightening, which comprises a preliminary processing algorithm and a generative model called the masked conditional variational autoencoder (MC-VAE). The processing method uses patch rearrangement to address the difficulty of erasing low degrees of curvature, providing reasonable preliminary results for the MC-VAE. The MC-VAE further straightens the results by leveraging chromosome patches conditioned on their curvatures to learn the mapping between banding patterns and conditions. During model training, we apply a masking strategy with a high masking ratio to train the MC-VAE, eliminating redundancy in the inputs. This yields a non-trivial reconstruction task, allowing the model to effectively preserve chromosome banding patterns and structural details in the reconstructed results. Extensive experiments on three public datasets with two stain styles show that our framework surpasses state-of-the-art methods in retaining banding patterns and structural details. Compared to using real-world bent chromosomes, using the high-quality straightened chromosomes generated by our method can improve the performance of various deep learning models for chromosome classification by a large margin. Such a straightening approach has the potential to be combined with other karyotyping systems to assist cytogeneticists in chromosome analysis.
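Of the ingredients described above, the high-masking-ratio training strategy is the easiest to make concrete. The snippet below is only an assumed illustration of that single step (randomly hiding most non-overlapping patches of a chromosome crop so that a reconstruction model must rely on context), not the MC-VAE or the patch-rearrangement preprocessing; the patch size, masking ratio, and crop size are placeholder values.

```python
import torch

def mask_patches(img, patch=16, ratio=0.75):
    """Zero out a random `ratio` of the non-overlapping patches in an image batch."""
    b, c, h, w = img.shape
    gh, gw = h // patch, w // patch
    keep = torch.rand(b, 1, gh, gw) >= ratio                     # True where a patch stays visible
    keep = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return img * keep, keep

crops = torch.rand(2, 1, 128, 128)                               # toy chromosome crops (assumed size)
masked, keep_mask = mask_patches(crops)
print(f"visible fraction: {keep_mask.float().mean():.2f}")       # roughly 1 - ratio
```

The masked images would then be fed to the conditional reconstruction model together with the curvature condition, and the reconstruction loss would be computed against the unmasked originals.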
Citations: 0
A Laplacian Pyramid Based Generative H&E Stain Augmentation Network
IF 10.6, CAS Q1 (Medicine)
IEEE Transactions on Medical Imaging Pub Date: 2023-05-23 DOI: 10.48550/arXiv.2305.14301
Fangda Li, Zhiqiang Hu, Wen Chen, A. Kak
Hematoxylin and Eosin (H&E) staining is a widely used sample preparation procedure for enhancing the saturation of tissue sections and the contrast between nuclei and cytoplasm in histology images for medical diagnostics. However, various factors, such as differences in the reagents used, result in high variability in the colors of the stains actually recorded. This variability poses a challenge to achieving generalization for machine-learning-based computer-aided diagnostic tools. To desensitize the learned models to stain variations, we propose the Generative Stain Augmentation Network (G-SAN), a GAN-based framework that augments a collection of cell images with simulated yet realistic stain variations. At its core, G-SAN uses a novel and highly computationally efficient Laplacian Pyramid (LP) based generator architecture that is capable of disentangling stain from cell morphology. Through the tasks of patch classification and nucleus segmentation, we show that using G-SAN-augmented training data provides on average a 15.7% improvement in F1 score and a 7.3% improvement in panoptic quality, respectively. Our code is available at https://github.com/lifangda01/GSAN-Demo.
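Because the generator is organized around a Laplacian pyramid, it helps to see what such a decomposition looks like. The sketch below shows only the generic pyramid build and reconstruct step under assumed sizes; the coarse base carries most of the low-frequency color (i.e., stain) content while the residuals carry texture. It is not G-SAN's stain-disentangling generator itself.

```python
import torch
import torch.nn.functional as F

def build_laplacian_pyramid(img, levels=3):
    """Decompose an image into band-pass residuals plus a coarse low-resolution base."""
    pyramid, current = [], img
    for _ in range(levels):
        down = F.avg_pool2d(current, kernel_size=2)
        up = F.interpolate(down, size=current.shape[-2:],
                           mode="bilinear", align_corners=False)
        pyramid.append(current - up)      # high-frequency detail at this scale
        current = down
    pyramid.append(current)               # coarse base: low-frequency color/stain content
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition by upsampling the base and adding the residuals back."""
    current = pyramid[-1]
    for residual in reversed(pyramid[:-1]):
        current = F.interpolate(current, size=residual.shape[-2:],
                                mode="bilinear", align_corners=False)
        current = current + residual
    return current

patch = torch.rand(1, 3, 256, 256)        # toy H&E patch (assumed size)
pyr = build_laplacian_pyramid(patch)
print(torch.allclose(reconstruct(pyr), patch, atol=1e-5))   # True: the decomposition is invertible
```

A generator built on this structure can, in principle, modify the coarse, color-dominated levels while leaving the fine morphological residuals largely untouched, which is the intuition behind disentangling stain from morphology.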
Citations: 1
Deep Learning for Retrospective Motion Correction in MRI: A Comprehensive Review
IF 10.6, CAS Q1 (Medicine)
IEEE Transactions on Medical Imaging Pub Date: 2023-05-11 DOI: 10.48550/arXiv.2305.06739
Veronika Spieker, H. Eichhorn, K. Hammernik, D. Rueckert, C. Preibisch, D. Karampinos, J. Schnabel
Motion represents one of the major challenges in magnetic resonance imaging (MRI). Since the MR signal is acquired in frequency space, any motion of the imaged object leads to complex artefacts in the reconstructed image in addition to other MR imaging artefacts. Deep learning has been frequently proposed for motion correction at several stages of the reconstruction process. The wide range of MR acquisition sequences, anatomies and pathologies of interest, and motion patterns (rigid vs. deformable and random vs. regular) makes a comprehensive solution unlikely. To facilitate the transfer of ideas between different applications, this review provides a detailed overview of proposed methods for learning-based motion correction in MRI together with their common challenges and potentials. This review identifies differences and synergies in underlying data usage, architectures, training and evaluation strategies. We critically discuss general trends and outline future directions, with the aim to enhance interaction between different application areas and research fields.
Citations: 2
FVP: Fourier Visual Prompting for Source-Free Unsupervised Domain Adaptation of Medical Image Segmentation
IF 10.6, CAS Q1 (Medicine)
IEEE Transactions on Medical Imaging Pub Date: 2023-04-26 DOI: 10.48550/arXiv.2304.13672
Yan Wang, Jian Cheng, Yixin Chen, Shuai Shao, Lanyun Zhu, Zhenzhou Wu, T. Liu, Haogang Zhu
Medical image segmentation methods normally perform poorly when there is a domain shift between training and testing data. Unsupervised Domain Adaptation (UDA) addresses the domain shift problem by training the model using both labeled data from the source domain and unlabeled data from the target domain. Source-Free UDA (SFUDA) was recently proposed to perform UDA without requiring the source data during adaptation, owing to data privacy or data transmission concerns; it typically adapts the pre-trained deep model in the testing stage. However, in real clinical scenarios of medical image segmentation, the trained model is normally frozen in the testing stage. In this paper, we propose Fourier Visual Prompting (FVP) for SFUDA of medical image segmentation. Inspired by prompt learning in natural language processing, FVP steers the frozen pre-trained model to perform well in the target domain by adding a visual prompt to the input target data. In FVP, the visual prompt is parameterized using only a small number of low-frequency learnable parameters in the input frequency space, and is learned by minimizing the segmentation loss between the predicted segmentation of the prompted target image and a reliable pseudo segmentation label of the target image under the frozen model. To our knowledge, FVP is the first work to apply visual prompts to SFUDA for medical image segmentation. The proposed FVP is validated on three public datasets, and experiments demonstrate that FVP yields better segmentation results compared with various existing methods.
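The prompting mechanism itself is compact enough to sketch. In the hypothetical illustration below, a small grid of learnable values is added at the low-frequency centre of the input spectrum, and only those values are optimized against a pseudo-label while the segmentation network stays frozen. The stand-in one-layer model, the all-zero pseudo-label, and the prompt and image sizes are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

H = W = 128
prompt_size = 8                                   # assumed extent of the learnable low-frequency band

frozen_model = nn.Conv2d(1, 2, 3, padding=1)      # stand-in for a pre-trained segmentation network
for p in frozen_model.parameters():
    p.requires_grad_(False)                       # the backbone is never updated

prompt = nn.Parameter(torch.zeros(1, 1, prompt_size, prompt_size, dtype=torch.cfloat))
opt = torch.optim.Adam([prompt], lr=1e-2)

def apply_prompt(img):
    """Add the learnable prompt to the low-frequency centre of the image spectrum."""
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    cy, cx, half = H // 2, W // 2, prompt_size // 2
    delta = torch.zeros_like(spec)
    delta[..., cy - half:cy + half, cx - half:cx + half] = prompt
    return torch.fft.ifft2(torch.fft.ifftshift(spec + delta, dim=(-2, -1))).real

target_img = torch.rand(1, 1, H, W)                     # toy target-domain image
pseudo_label = torch.zeros(1, H, W, dtype=torch.long)   # stand-in for a reliable pseudo-label

for step in range(100):
    logits = frozen_model(apply_prompt(target_img))
    loss = F.cross_entropy(logits, pseudo_label)        # gradients reach only the prompt
    opt.zero_grad(); loss.backward(); opt.step()
```

Because only the low-frequency prompt is learned, the number of trainable parameters stays tiny and the frozen model never has to be touched, which matches the source-free, frozen-model constraint described in the abstract.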
Citations: 1
Point-supervised Single-cell Segmentation via Collaborative Knowledge Sharing
IF 10.6, CAS Q1 (Medicine)
IEEE Transactions on Medical Imaging Pub Date: 2023-04-20 DOI: 10.48550/arXiv.2304.10671
Ji Yu
Despite their superior performance, deep-learning methods often suffer from the disadvantage of needing large-scale, well-annotated training data. In response, recent literature has seen a proliferation of efforts aimed at reducing the annotation burden. This paper focuses on a weakly-supervised training setting for single-cell segmentation models, where the only available training labels are the rough locations of individual cells. The specific problem is of practical interest due to the widely available nuclei counter-stain data in the biomedical literature, from which cell locations can be derived programmatically. Of more general interest is a proposed self-learning method called collaborative knowledge sharing, which is related to but distinct from the more well-known consistency learning methods. This strategy achieves self-learning by sharing knowledge between a principal model and a very lightweight collaborator model. Importantly, the two models are entirely different in their architectures, capacities, and model outputs: in our case, the principal model approaches the segmentation problem from an object-detection perspective, whereas the collaborator model approaches it from a semantic segmentation perspective. We assessed the effectiveness of this strategy by conducting experiments on LIVECell, a large single-cell segmentation dataset of bright-field images, and on the A431 dataset, a fluorescence image dataset in which the location labels are generated automatically from nuclei counter-stain data. Implementation code is available at https://github.com/jiyuuchc/lacss.
Citations: 0