Latest Articles in IEEE Journal of Biomedical and Health Informatics

Acoustic COVID-19 Detection Using Multiple Instance Learning.
IF 6.7 · Tier 2 (Medicine)
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-04 DOI: 10.1109/JBHI.2024.3474975
Michael Reiter, Pernkopf Franz
{"title":"Acoustic COVID-19 Detection Using Multiple Instance Learning.","authors":"Michael Reiter, Pernkopf Franz","doi":"10.1109/JBHI.2024.3474975","DOIUrl":"10.1109/JBHI.2024.3474975","url":null,"abstract":"<p><p>In the COVID-19 pandemic, a rigorous testing scheme was crucial. However, tests can be time-consuming and expensive. A machine learning-based diagnostic tool for audio recordings could enable widespread testing at low costs. In order to achieve comparability between such algorithms, the DiCOVA challenge was created. It is based on the Coswara dataset offering the recording categories cough, speech, breath and vowel phonation. Recording durations vary greatly, ranging from one second to over a minute. A base model is pre-trained on random, short time intervals. Subsequently, a Multiple Instance Learning (MIL) model based on self-attention is incorporated to make collective predictions for multiple time segments within each audio recording, taking advantage of longer durations. In order to compete in the fusion category of the DiCOVA challenge, we utilize a linear regression approach among other fusion methods to combine predictions from the most successful models associated with each sound modality. The application of the MIL approach significantly improves generalizability, leading to an AUC ROC score of 86.6% in the fusion category. By incorporating previously unused data, including the sound modality 'sustained vowel phonation' and patient metadata, we were able to significantly improve our previous results reaching a score of 92.2%.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
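A minimal sketch of attention-based MIL pooling over per-segment audio embeddings is shown below (PyTorch). The embedding dimension, segment count, and module names are illustrative assumptions rather than the authors' implementation, and the gated-attention pooling here stands in for the paper's self-attention-based MIL model; the point is that the softmax over segments lets recordings of any length produce a single recording-level score.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Attention-based MIL pooling: weights per-segment embeddings and
    aggregates them into one bag-level (recording-level) prediction."""

    def __init__(self, embed_dim: int = 128, attn_dim: int = 64):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),                # one attention score per segment
        )
        self.classifier = nn.Linear(embed_dim, 1)  # bag-level COVID-19 logit

    def forward(self, segment_embeddings: torch.Tensor) -> torch.Tensor:
        # segment_embeddings: (num_segments, embed_dim), e.g. from a pre-trained base model
        scores = self.attention(segment_embeddings)                 # (num_segments, 1)
        weights = torch.softmax(scores, dim=0)                      # normalize over segments
        bag_embedding = (weights * segment_embeddings).sum(dim=0)   # (embed_dim,)
        return self.classifier(bag_embedding)                       # scalar logit

if __name__ == "__main__":
    model = AttentionMILPooling()
    segments = torch.randn(12, 128)        # hypothetical recording split into 12 short segments
    logit = model(segments)
    print(torch.sigmoid(logit))            # probability-like score for the whole recording
```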
BioSAM: Generating SAM Prompts From Superpixel Graph for Biological Instance Segmentation.
IF 6.7 · Tier 2 (Medicine)
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-04 DOI: 10.1109/JBHI.2024.3474706
Miaomiao Cai, Xiaoyu Liu, Zhiwei Xiong, Xuejin Chen
{"title":"BioSAM: Generating SAM Prompts From Superpixel Graph for Biological Instance Segmentation.","authors":"Miaomiao Cai, Xiaoyu Liu, Zhiwei Xiong, Xuejin Chen","doi":"10.1109/JBHI.2024.3474706","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3474706","url":null,"abstract":"<p><p>Proposal-free instance segmentation methods have significantly advanced the field of biological image analysis. Recently, the Segment Anything Model (SAM) has shown an extraordinary ability to handle challenging instance boundaries. However, directly applying SAM to biological images that contain instances with complex morphologies and dense distributions fails to yield satisfactory results. In this work, we propose BioSAM, a new biological instance segmentation framework generating SAM prompts from a superpixel graph. Specifically, to avoid over-merging, we first generate sufficient superpixels as graph nodes and construct an initialized graph. We then generate initial prompts from each superpixel and aggregate them through a graph neural network (GNN) by predicting the relationship of superpixels to avoid over-segmentation. We employ the SAM encoder embeddings and the SAM-assisted superpixel similarity as new features for the graph to enhance its discrimination capability. With the graph-based prompt aggregation, we utilize the aggregated prompts in SAM to refine the segmentation and generate more accurate instance boundaries. Comprehensive experiments on four representative biological datasets demonstrate that our proposed method outperforms state-of-the-art methods.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
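To illustrate the prompt-aggregation idea, the sketch below shows a single mean-aggregation message-passing layer over a superpixel graph together with an edge classifier that scores whether two adjacent superpixels should be merged. It is one plausible reading of the GNN step described above, not BioSAM's actual architecture; the feature dimensions, edge list, and class names are assumptions, and edges are treated as directed purely for brevity.

```python
import torch
import torch.nn as nn

class SuperpixelGNN(nn.Module):
    """One round of mean-aggregation message passing over a superpixel graph,
    followed by an edge classifier that predicts whether two superpixels
    belong to the same instance (i.e. their prompts should be merged)."""

    def __init__(self, in_dim: int = 64, hidden_dim: int = 64):
        super().__init__()
        self.update = nn.Linear(2 * in_dim, hidden_dim)
        self.edge_scorer = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
        )

    def forward(self, x: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim) superpixel features; edges: (num_edges, 2) index pairs
        num_nodes = x.size(0)
        src, dst = edges[:, 0], edges[:, 1]
        agg = torch.zeros_like(x)
        deg = torch.zeros(num_nodes, 1)
        agg.index_add_(0, dst, x[src])                          # sum neighbour features
        deg.index_add_(0, dst, torch.ones(edges.size(0), 1))    # neighbour counts
        h = torch.relu(self.update(torch.cat([x, agg / deg.clamp(min=1)], dim=-1)))
        # Score each edge: probability that its two superpixels should be merged.
        return torch.sigmoid(self.edge_scorer(torch.cat([h[src], h[dst]], dim=-1)))

if __name__ == "__main__":
    feats = torch.randn(6, 64)                                  # 6 hypothetical superpixels
    edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4], [4, 5]])
    print(SuperpixelGNN()(feats, edges).shape)                  # (5, 1) merge probabilities
```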
Transformer³: A Pure Transformer Framework for fMRI-Based Representations of Human Brain Function.
IF 6.7 · Tier 2 (Medicine)
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-04 DOI: 10.1109/JBHI.2024.3471186
Xiaoxi Tian, Hao Ma, Yun Guan, Le Xu, Jiangcong Liu, Lixia Tian
{"title":"Transformer<sup>3</sup>: A Pure Transformer Framework for fMRI-Based Representations of Human Brain Function.","authors":"Xiaoxi Tian, Hao Ma, Yun Guan, Le Xu, Jiangcong Liu, Lixia Tian","doi":"10.1109/JBHI.2024.3471186","DOIUrl":"10.1109/JBHI.2024.3471186","url":null,"abstract":"<p><p>Effective representation learning is essential for neuroimage-based individualized predictions. Numerous studies have been performed on fMRI-based individualized predictions, leveraging sample-wise, spatial, and temporal interdependencies hidden in fMRI data. However, these studies failed to fully utilize the effective information hidden in fMRI data, as only one or two types of the interdependencies were analyzed. To effectively extract representations of human brain function through fully leveraging the three types of the interdependencies, we establish a pure transformer-based framework, Transformer3, leveraging transformer's strong ability to capture interdependencies within the input data. Transformer<sup>3</sup> consists mainly of three transformer modules, with the Batch Transformer module used for addressing sample-wise similarities and differences, the Region Transformer module used for handling complex spatial interdependencies among brain regions, and the Time Transformer module used for capturing temporal interdependencies across time points. Experiments on age, IQ, and sex predictions based on two public datasets demonstrate the effectiveness of the proposed Transformer3. As the only hypothesis is that sample-wise, spatial, and temporal interdependencies extensively exist within the input data, the proposed Transformer<sup>3</sup> can be widely used for representation learning based on multivariate time-series. Furthermore, the pure transformer framework makes it quite convenient for understanding the driving factors underlying the predictive models based on Transformer<sup>3</sup>.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142375330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
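The sketch below illustrates the general idea of attending along each of the three axes of an fMRI tensor shaped (batch, regions, time, features) by folding the other axes into the batch dimension. A single shared nn.TransformerEncoderLayer is reused here purely for brevity; the real Transformer³ uses separate Batch, Region, and Time Transformer modules, and all shapes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

def attend_along(x: torch.Tensor, layer: nn.TransformerEncoderLayer, axis: int) -> torch.Tensor:
    """Apply a transformer encoder layer along one axis of a
    (batch, regions, time, features) tensor by folding the other axes
    into the batch dimension."""
    b, r, t, f = x.shape
    if axis == 0:      # attend across samples in the batch (Batch Transformer idea)
        seq = x.permute(1, 2, 0, 3).reshape(r * t, b, f)
    elif axis == 1:    # attend across brain regions (Region Transformer idea)
        seq = x.permute(0, 2, 1, 3).reshape(b * t, r, f)
    else:              # attend across time points (Time Transformer idea)
        seq = x.reshape(b * r, t, f)
    out = layer(seq)   # batch_first layer: (merged_batch, seq_len, features)
    if axis == 0:
        return out.reshape(r, t, b, f).permute(2, 0, 1, 3)
    if axis == 1:
        return out.reshape(b, t, r, f).permute(0, 2, 1, 3)
    return out.reshape(b, r, t, f)

if __name__ == "__main__":
    x = torch.randn(4, 90, 120, 32)   # 4 subjects, 90 regions, 120 time points, 32 features
    layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
    for axis in (0, 1, 2):            # batch, region, and time attention in sequence
        x = attend_along(x, layer, axis)
    print(x.shape)
```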
IEEE Journal of Biomedical and Health Informatics Publication Information
IF 6.7 · Tier 2 (Medicine)
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-03 DOI: 10.1109/JBHI.2024.3451609
{"title":"IEEE Journal of Biomedical and Health Informatics Publication Information","authors":"","doi":"10.1109/JBHI.2024.3451609","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3451609","url":null,"abstract":"","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10704795","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142376760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Guest Editorial Machine Learning Technology for Biomedical Signal Processing
IF 6.7 · Tier 2 (Medicine)
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-03 DOI: 10.1109/JBHI.2024.3451828
Aceng Sambas;Sundarapandian Vaidyanathan;Azwa Abdul Aziz
{"title":"Guest Editorial Machine Learning Technology for Biomedical Signal Processing","authors":"Aceng Sambas;Sundarapandian Vaidyanathan;Azwa Abdul Aziz","doi":"10.1109/JBHI.2024.3451828","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3451828","url":null,"abstract":"Biomedical signal processing integrates the evaluation of health measures for the purpose of delivering significant factor diagnostic information [1]. Real-time monitoring features enabled by biomedical signal processing can lead to better chronic condition control and timely identification of hazardous occurrences. The use of a remote database for cloud computing in biomedical signal processing has significant implications for healthcare applications. To guarantee effective analysis, optimal data presentation, and product quality, innovative research and ideas need to be implemented.","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10704797","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142376903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE Journal of Biomedical and Health Informatics Information for Authors
IF 6.7 · Tier 2 (Medicine)
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-03 DOI: 10.1109/JBHI.2024.3451605
{"title":"IEEE Journal of Biomedical and Health Informatics Information for Authors","authors":"","doi":"10.1109/JBHI.2024.3451605","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3451605","url":null,"abstract":"","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10704801","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142377000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fine-grained Fidgety Movement Classification using Active Learning.
IF 6.7 · Tier 2 (Medicine)
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-03 DOI: 10.1109/JBHI.2024.3473947
Romero Morais, Truyen Tran, Caroline Alexander, Natasha Amery, Catherine Morgan, Alicia Spittle, Vuong Le, Nadia Badawi, Alison Salt, Jane Valentine, Catherine Elliott, Elizabeth M Hurrion, Paul A Dawson, Svetha Venkatesh
{"title":"Fine-grained Fidgety Movement Classification using Active Learning.","authors":"Romero Morais, Truyen Tran, Caroline Alexander, Natasha Amery, Catherine Morgan, Alicia Spittle, Vuong Le, Nadia Badawi, Alison Salt, Jane Valentine, Catherine Elliott, Elizabeth M Hurrion, Paul A Dawson, Svetha Venkatesh","doi":"10.1109/JBHI.2024.3473947","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3473947","url":null,"abstract":"<p><p>Typically developing infants, between the corrected age of 9-20 weeks, produce fidgety movements. These movements can be identified with the General Movement Assessment, but their identification requires trained professionals to conduct the assessment from video recordings. Since trained professionals are expensive and their demand may be higher than their availability, computer vision-based solutions have been developed to assist practitioners. However, most solutions to date treat the problem as a direct mapping from video to infant status, without modeling fidgety movements throughout the video. To address that, we propose to directly model infants' short movements and classify them as fidgety or non-fidgety. In this way, we model the explanatory factor behind the infant's status and improve model interpretability. The issue with our proposal is that labels for an infant's short movements are not available, which precludes us to train such a model. We overcome this issue with active learning. Active learning is a framework that minimizes the amount of labeled data required to train a model, by only labeling examples that are considered \"informative\" to the model. The assumption is that a model trained on informative examples reaches a higher performance level than a model trained with randomly selected examples. We validate our framework by modeling the movements of infants' hips on two representative cohorts: typically developing and at-risk infants. Our results show that active learning is suitable to our problem and that it works adequately even when the models are trained with labels provided by a novice annotator.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142371653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
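The sketch below shows a generic uncertainty-sampling active-learning loop of the kind the abstract describes: a model is trained on a small labeled seed set, the pool examples it is least confident about are sent to an annotator, and the loop repeats. The logistic-regression classifier, synthetic clip features, seed-set construction, and batch sizes are stand-ins for illustration only, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(pool_features, model, batch_size=8):
    """Pick the pool examples the current model is least confident about."""
    probs = model.predict_proba(pool_features)
    margin = np.abs(probs[:, 1] - 0.5)          # small margin = high uncertainty
    return np.argsort(margin)[:batch_size]

# Hypothetical features for short movement clips (e.g. hip-trajectory descriptors).
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 16))
labels = (features[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)  # fidgety vs non-fidgety

# Small seed set containing both classes, labeled up front by an annotator.
labeled = list(np.where(labels == 0)[0][:5]) + list(np.where(labels == 1)[0][:5])
pool = [i for i in range(500) if i not in labeled]

for round_ in range(5):                          # each round queries the annotator for a few clips
    model = LogisticRegression(max_iter=1000).fit(features[labeled], labels[labeled])
    query = uncertainty_sampling(features[pool], model)
    newly_labeled = [pool[i] for i in query]     # in practice these go to a human annotator
    labeled += newly_labeled
    pool = [i for i in pool if i not in newly_labeled]
    # Accuracy over the full synthetic set, purely to watch the trend as labels accumulate.
    print(f"round {round_}: {len(labeled)} labels, acc={model.score(features, labels):.3f}")
```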
Enhancing Early Alzheimer's Disease Detection Through Big Data and Ensemble Few-Shot Learning.
IF 6.7 · Tier 2 (Medicine)
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-02 DOI: 10.1109/JBHI.2024.3473541
Safa Ben Atitallah, Maha Driss, Wadii Boulila, Anis Koubaa
{"title":"Enhancing Early Alzheimer's Disease Detection Through Big Data and Ensemble Few-Shot Learning.","authors":"Safa Ben Atitallah, Maha Driss, Wadii Boulila, Anis Koubaa","doi":"10.1109/JBHI.2024.3473541","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3473541","url":null,"abstract":"<p><p>Alzheimer's disease is a severe brain disorder that causes harm in various brain areas and leads to memory damage. The limited availability of labeled medical data poses a significant challenge for accurate Alzheimer's disease detection. There is a critical need for effective methods to improve the accuracy of Alzheimer's disease detection, considering the scarcity of labeled data, the complexity of the disease, and the constraints related to data privacy. To address this challenge, our study leverages the power of Big Data in the form of pre-trained Convolutional Neural Networks (CNNs) within the framework of Few-Shot Learning (FSL) and ensemble learning. We propose an ensemble approach based on a Prototypical Network (ProtoNet), a powerful method in FSL, integrating various pre-trained CNNs as encoders. This integration enhances the richness of features extracted from medical images. Our approach also includes a combination of class-aware loss and entropy loss to ensure a more precise classification of Alzheimer's disease progression levels. The effectiveness of our method was evaluated using two datasets, the Kaggle Alzheimer dataset, and the ADNI dataset, achieving an accuracy of 99.72% and 99.86%, respectively. The comparison of our results with relevant state-of-the-art studies demonstrated that our approach achieved superior accuracy and highlighted its validity and potential for real-world applications in early Alzheimer's disease detection.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142365093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
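The core Prototypical Network step referenced above is easy to state in code: class prototypes are the mean support embeddings, and queries are scored by negative squared distance to each prototype. The sketch below assumes hypothetical 256-dimensional embeddings from a pre-trained CNN encoder and a 4-way 5-shot episode; the ensemble of encoders and the paper's class-aware and entropy losses are not shown.

```python
import torch

def prototypical_logits(support: torch.Tensor, support_labels: torch.Tensor,
                        query: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Core Prototypical Network step: each class prototype is the mean of its
    support embeddings; query logits are negative squared Euclidean distances."""
    prototypes = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(num_classes)]
    )                                              # (num_classes, dim)
    dists = torch.cdist(query, prototypes) ** 2    # (num_query, num_classes)
    return -dists                                  # higher = closer prototype

if __name__ == "__main__":
    # Hypothetical embeddings from one pre-trained CNN encoder (one ensemble member),
    # for a 4-way 5-shot episode over disease-progression stages.
    support = torch.randn(20, 256)
    support_labels = torch.arange(4).repeat_interleave(5)
    query = torch.randn(8, 256)
    logits = prototypical_logits(support, support_labels, query, num_classes=4)
    print(logits.argmax(dim=1))   # predicted stage per query image
    # In an ensemble, logits from several encoders would be combined before the argmax.
```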
Longitudinal Alzheimer's Disease Progression Prediction with Modality Uncertainty and Optimization of Information Flow.
IF 6.7 · Tier 2 (Medicine)
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-02 DOI: 10.1109/JBHI.2024.3472462
Duy-Phuong Dao, Hyung-Jeong Yang, Jahae Kim, Ngoc-Huynh Ho
{"title":"Longitudinal Alzheimer's Disease Progression Prediction with Modality Uncertainty and Optimization of Information Flow.","authors":"Duy-Phuong Dao, Hyung-Jeong Yang, Jahae Kim, Ngoc-Huynh Ho","doi":"10.1109/JBHI.2024.3472462","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3472462","url":null,"abstract":"<p><p>Alzheimer's disease (AD) is a global neurodegenerative disorder that affects millions of individuals worldwide. Actual AD imaging datasets challenge the construction of reliable longitudinal models owing to imaging modality uncertainty. In addition, they are still unable to retain or obtain important information during disease progression from previous to followup time points. For example, the output values of current gates in recurrent models should be close to a specific value that indicates the model is uncertain about retaining or forgetting information. In this study, we propose a model which can extract and constrain each modality into a common representation space to capture intermodality interactions among different modalities associated with modality uncertainty to predict AD progression. In addition, we provide an auxiliary function to enhance the ability of recurrent gate robustly and effectively in controlling the flow of information over time using longitudinal data. We conducted comparative analysis on data from the Alzheimer's Disease Neuroimaging Initiative database. Our model outperformed other methods across all evaluation metrics. Therefore, the proposed model provides a promising solution for addressing modality uncertainty challenges in multimodal longitudinal AD progression prediction.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142365095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
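One plausible reading of the gate-regularization idea above is an auxiliary penalty that discourages recurrent gate values from sitting near the "uncertain" region around 0.5, so each visit either retains or overwrites information decisively. The GRU-style step below is a sketch under that assumption, not the authors' model; the dimensions, visit count, and penalty form are illustrative.

```python
import torch
import torch.nn as nn

class GatedStep(nn.Module):
    """A single GRU-style step that also returns its update gate, so an auxiliary
    loss can act on the gate values (one plausible reading of the entry above)."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.gate = nn.Linear(in_dim + hid_dim, hid_dim)   # update gate z_t
        self.cand = nn.Linear(in_dim + hid_dim, hid_dim)   # candidate state

    def forward(self, x, h):
        z = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1)))
        h_tilde = torch.tanh(self.cand(torch.cat([x, h], dim=-1)))
        h_new = (1 - z) * h + z * h_tilde                  # retain vs. overwrite
        return h_new, z

def gate_decisiveness_penalty(z: torch.Tensor) -> torch.Tensor:
    # Smallest at z=0 or z=1 (confident retain/forget), largest at z=0.5 (uncertain).
    return (z * (1 - z)).mean()

if __name__ == "__main__":
    step = GatedStep(in_dim=32, hid_dim=64)
    h = torch.zeros(8, 64)                    # 8 hypothetical patients
    aux = 0.0
    for t in range(4):                        # 4 longitudinal visits with random features
        h, z = step(torch.randn(8, 32), h)
        aux = aux + gate_decisiveness_penalty(z)
    print(float(aux / 4))                     # auxiliary term to be added to the task loss
```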
A Dual-Branch Cross-Modality-Attention Network for Thyroid Nodule Diagnosis Based on Ultrasound Images and Contrast-Enhanced Ultrasound Videos.
IF 6.7 · Tier 2 (Medicine)
IEEE Journal of Biomedical and Health Informatics Pub Date : 2024-10-02 DOI: 10.1109/JBHI.2024.3472609
Jianning Chi, Jia-Hui Chen, Bo Wu, Jin Zhao, Kai Wang, Xiaosheng Yu, Wenjun Zhang, Ying Huang
{"title":"A Dual-Branch Cross-Modality-Attention Network for Thyroid Nodule Diagnosis Based on Ultrasound Images and Contrast-Enhanced Ultrasound Videos.","authors":"Jianning Chi, Jia-Hui Chen, Bo Wu, Jin Zhao, Kai Wang, Xiaosheng Yu, Wenjun Zhang, Ying Huang","doi":"10.1109/JBHI.2024.3472609","DOIUrl":"https://doi.org/10.1109/JBHI.2024.3472609","url":null,"abstract":"<p><p>Contrast-enhanced ultrasound (CEUS) has been extensively employed as an imaging modality in thyroid nodule diagnosis due to its capacity to visualise the distribution and circulation of micro-vessels in organs and lesions in a non-invasive manner. However, current CEUS-based thyroid nodule diagnosis methods suffered from: 1) the blurred spatial boundaries between nodules and other anatomies in CEUS videos, and 2) the insufficient representations of the local structural information of nodule tissues by the features extracted only from CEUS videos. In this paper, we propose a novel dual-branch network with a cross-modality-attention mechanism for thyroid nodule diagnosis by integrating the information from tow related modalities, i.e., CEUS videos and ultrasound image. The mechanism has two parts: US-attention-from-CEUS transformer (UAC-T) and CEUS-attention-from-US transformer (CAU-T). As such, this network imitates the manner of human radiologists by decomposing the diagnosis into two correlated tasks: 1) the spatio-temporal features extracted from CEUS are hierarchically embedded into the spatial features extracted from US with UAC-T for the nodule segmentation; 2) the US spatial features are used to guide the extraction of the CEUS spatio-temporal features with CAU-T for the nodule classification. The two tasks are intertwined in the dual-branch end-to-end network and optimized with the multi-task learning (MTL) strategy. The proposed method is evaluated on our collected thyroid US-CEUS dataset. Experimental results show that our method achieves the classification accuracy of 86.92%, specificity of 66.41%, and sensitivity of 97.01%, outperforming the state-of-the-art methods. As a general contribution in the field of multi-modality diagnosis of diseases, the proposed method has provided an effective way to combine static information with its related dynamic information, improving the quality of deep learning based diagnosis with an additional benefit of explainability.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":null,"pages":null},"PeriodicalIF":6.7,"publicationDate":"2024-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142365089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
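The "attention-from-the-other-modality" idea can be sketched as a standard cross-attention block in which one modality's tokens act as queries over the other modality's tokens. The block below is a generic illustration of that pattern, not the UAC-T/CAU-T modules themselves; token counts, feature dimensions, and variable names are assumptions.

```python
import torch
import torch.nn as nn

class CrossModalityAttention(nn.Module):
    """Minimal cross-attention block: features of one modality act as queries and
    attend to the other modality's features (keys/values)."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feats: torch.Tensor, context_feats: torch.Tensor) -> torch.Tensor:
        # query_feats:   (batch, n_query_tokens, dim)   e.g. US spatial tokens
        # context_feats: (batch, n_context_tokens, dim) e.g. CEUS spatio-temporal tokens
        attended, _ = self.attn(query_feats, context_feats, context_feats)
        return self.norm(query_feats + attended)   # residual connection

if __name__ == "__main__":
    us_tokens = torch.randn(2, 196, 256)          # hypothetical US image patch tokens
    ceus_tokens = torch.randn(2, 8 * 49, 256)     # hypothetical CEUS frame-by-patch tokens
    uac = CrossModalityAttention()
    fused = uac(us_tokens, ceus_tokens)           # US queries enriched by CEUS context
    print(fused.shape)
```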