Proceedings of the ACM Conference on Health, Inference, and Learning: Latest Articles

CheXternal: generalization of deep learning models for chest X-ray interpretation to photos of chest X-rays and external clinical settings
Proceedings of the ACM Conference on Health, Inference, and Learning. Pub Date: 2021-02-17. DOI: 10.1145/3450439.3451876
P. Rajpurkar, Anirudh Joshi, A. Pareek, A. Ng, M. Lungren
Abstract: Recent advances in training deep learning models have demonstrated the potential to provide accurate chest X-ray interpretation and increase access to radiology expertise. However, poor generalization due to data distribution shifts in clinical settings is a key barrier to implementation. In this study, we measured the diagnostic performance of 8 different chest X-ray models when applied to (1) smartphone photos of chest X-rays and (2) external datasets, without any fine-tuning. All models were developed by different groups, submitted to the CheXpert challenge, and re-applied to the test datasets without further tuning. We found that (1) on photos of chest X-rays, all 8 models experienced a statistically significant drop in task performance, but only 3 performed significantly worse than radiologists on average, and (2) on the external set, none of the models performed statistically significantly worse than radiologists, and five performed statistically significantly better. Our results demonstrate that under clinically relevant distribution shifts, some chest X-ray models were comparable to radiologists while others were not. Future work should investigate the aspects of model training procedures and dataset collection that influence generalization in the presence of data distribution shifts.
Citations: 10
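The per-model significance tests described above can be pictured as a paired bootstrap over test cases. The sketch below is illustrative only (synthetic labels and scores, and a hypothetical `bootstrap_auc_drop` helper), not the authors' evaluation code:

```python
import numpy as np

def auc(labels, scores):
    """Mann-Whitney AUC: probability a random positive outscores a random negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

def bootstrap_auc_drop(labels, s_orig, s_photo, n_boot=1000, seed=0):
    """95% bootstrap CI for AUC(original) - AUC(photo), resampling test cases."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    drops = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample cases with replacement
        drops.append(auc(labels[idx], s_orig[idx]) - auc(labels[idx], s_photo[idx]))
    return np.percentile(drops, [2.5, 97.5])

# Synthetic stand-in: "photo" scores are a noisier copy of the original scores
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 400)
s_orig = y + rng.normal(0.0, 0.8, 400)   # informative model scores
s_photo = y + rng.normal(0.0, 1.6, 400)  # degraded under distribution shift
lo, hi = bootstrap_auc_drop(y, s_orig, s_photo)
# a confidence interval excluding zero indicates a statistically significant drop
```

A two-sided interval that stays above zero corresponds to the "statistically significant drop" reported for all 8 models on photos.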
CheXtransfer: performance and parameter efficiency of ImageNet models for chest X-ray interpretation
Proceedings of the ACM Conference on Health, Inference, and Learning. Pub Date: 2021-01-18. DOI: 10.1145/3450439.3451867
Alexander Ke, William Ellsworth, O. Banerjee, A. Ng, P. Rajpurkar
Abstract: Deep learning methods for chest X-ray interpretation typically rely on pretrained models developed for ImageNet. This paradigm assumes that better ImageNet architectures perform better on chest X-ray tasks and that ImageNet-pretrained weights provide a performance boost over random initialization. In this work, we compare the transfer performance and parameter efficiency of 16 popular convolutional architectures on a large chest X-ray dataset (CheXpert) to investigate these assumptions. First, we find no relationship between ImageNet performance and CheXpert performance, for models both with and without pretraining. Second, we find that, for models without pretraining, the choice of model family influences performance more than size within a family on medical imaging tasks. Third, we observe that ImageNet pretraining yields a statistically significant boost in performance across architectures, with a higher boost for smaller architectures. Fourth, we examine whether ImageNet architectures are unnecessarily large for CheXpert by truncating the final blocks of pretrained models, and find that we can make models 3.25x more parameter-efficient on average without a statistically significant drop in performance. Our work contributes new experimental evidence on the relationship between ImageNet performance and chest X-ray interpretation performance.
Citations: 81
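The "no relationship" finding above is the kind of claim typically checked with a rank correlation between ImageNet accuracy and CheXpert AUC. A minimal sketch, with made-up numbers standing in for the 16 architectures:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks (no ties)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical (ImageNet top-1 %, CheXpert AUC) pairs, not the paper's data
imagenet_acc = np.array([69.8, 71.9, 74.4, 76.2, 77.4, 80.1])
chexpert_auc = np.array([0.873, 0.869, 0.876, 0.871, 0.874, 0.870])
rho = spearman(imagenet_acc, chexpert_auc)
# |rho| near zero: architecture rank on ImageNet does not predict CheXpert rank
```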
ACM CHIL '21: ACM Conference on Health, Inference, and Learning, Virtual Event, USA, April 8-9, 2021
Proceedings of the ACM Conference on Health, Inference, and Learning. Pub Date: 2021-01-01. DOI: 10.1145/3450439
Citations: 0
iGOS++: integrated gradient optimized saliency by bilateral perturbations
Proceedings of the ACM Conference on Health, Inference, and Learning. Pub Date: 2021-01-01. DOI: 10.1145/3450439.3451865
S. Khorram, T. Lawson, Fuxin Li
Abstract: The black-box nature of deep networks makes explaining why they make certain predictions extremely challenging. Saliency maps are among the most widely used local explanation tools for alleviating this problem. A primary approach for generating saliency maps is to optimize a mask over the input dimensions so that the network's output for a given class is influenced the most. However, prior work studies such influence only by removing evidence from the input. In this paper, we present iGOS++, a framework for generating saliency maps for black-box networks by considering both the removal and the preservation of evidence. Additionally, we introduce a bilateral total variation term to the optimization that improves the continuity of the saliency map, especially at high resolution and for thin object parts. We validate the capabilities of iGOS++ through extensive experiments and comparison against state-of-the-art saliency map methods. Our results show significant improvement in locating salient regions that are directly interpretable by humans. We also showcase the capabilities of iGOS++ in a real-world application of AI to medical data: classifying COVID-19 cases from X-ray images. To our surprise, we discovered that the classifier sometimes overfits to the text characters printed on the X-ray images rather than focusing on the evidence in the lungs. Fixing this overfitting issue by data cleansing significantly improved the precision and recall of the classifier.
Citations: 15
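A toy version of the combined objective: deletion and preservation terms plus a bilateral total-variation penalty that is damped across image edges. The `score_fn`, weights, and 8x8 "image" below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def bilateral_tv(mask, image, sigma=0.1):
    """Total variation of the mask, downweighted where the image itself has an
    edge, so the mask may change sharply across object boundaries."""
    tv = 0.0
    for axis in (0, 1):
        dm = np.abs(np.diff(mask, axis=axis))
        di = np.abs(np.diff(image, axis=axis))
        tv += (dm * np.exp(-di / sigma)).sum()
    return tv

def igos_objective(mask, image, baseline, score_fn, lam=0.01):
    """Removal plus preservation of evidence, with the bilateral TV penalty."""
    deletion = score_fn(image * (1 - mask) + baseline * mask)       # masking out evidence should lower the score
    preservation = -score_fn(image * mask + baseline * (1 - mask))  # the kept region alone should sustain it
    return deletion + preservation + lam * bilateral_tv(mask, image)

# Toy setup: the "network" scores the brightness of a fixed patch
image = np.zeros((8, 8)); image[2:5, 2:5] = 1.0
baseline = np.zeros((8, 8))
score = lambda z: z[2:5, 2:5].sum()

mask_on = np.zeros((8, 8)); mask_on[2:5, 2:5] = 1.0    # covers the evidence
mask_off = np.zeros((8, 8)); mask_off[5:8, 5:8] = 1.0  # misses it entirely
obj_on = igos_objective(mask_on, image, baseline, score)
obj_off = igos_objective(mask_off, image, baseline, score)
# the mask covering the true evidence scores far lower under both terms
```

In the actual method the mask is optimized by gradient descent against such an objective; here the two fixed masks just show which one the objective prefers.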
Concept-based model explanations for electronic health records
Proceedings of the ACM Conference on Health, Inference, and Learning. Pub Date: 2020-12-03. DOI: 10.1145/3450439.3451858
Sebastien Baur, Shaobo Hou, Eric Loreaux, Diana Mincu, A. Mottram, Ivan V. Protsyuk, Nenad Tomašev, Martin G. Seneviratne, Alan Karthikesalingam, J. Schrouff
Abstract: Recurrent Neural Networks (RNNs) are often used for sequential modeling of adverse outcomes in electronic health records (EHRs) due to their ability to encode past clinical states. These deep, recurrent architectures have shown better performance than other modeling approaches on a number of tasks, fueling interest in deploying deep models in clinical settings. One of the key elements in ensuring safe model deployment and building user trust is model explainability. Testing with Concept Activation Vectors (TCAV) has recently been introduced as a way of providing human-understandable explanations by comparing high-level concepts to the network's gradients. While the technique has shown promising results in real-world imaging applications, it has not been applied to structured temporal inputs. To enable the application of TCAV to sequential predictions in the EHR, we propose an extension of the method to time series data. We evaluate the proposed approach on an open EHR benchmark from the intensive care unit, as well as on synthetic data where we are able to better isolate individual effects.
Citations: 20
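TCAV's core quantities can be sketched in a few lines. Note the simplification: the original method fits a linear classifier to separate concept from random activations and takes its normal vector, whereas this sketch uses a difference of means; all activations and gradients here are synthetic:

```python
import numpy as np

def cav(concept_acts, random_acts):
    """Concept activation vector, simplified to a normalized difference of means."""
    v = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def tcav_score(class_grads, cav_vec):
    """Fraction of examples whose class gradient has a positive component
    along the concept direction."""
    return float((class_grads @ cav_vec > 0).mean())

# Synthetic layer activations: the concept shifts dimension 0
rng = np.random.default_rng(0)
concept_acts = rng.normal(0, 1, (50, 16)); concept_acts[:, 0] += 3.0
random_acts = rng.normal(0, 1, (50, 16))
class_grads = rng.normal(0, 1, (40, 16)); class_grads[:, 0] += 2.0  # class sensitive to dim 0
v = cav(concept_acts, random_acts)
score = tcav_score(class_grads, v)  # high score: the concept matters for the class
```

The paper's extension applies this machinery to time series, e.g. by scoring activations and gradients per time step rather than per image.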
Self-supervised transfer learning of physiological representations from free-living wearable data
Proceedings of the ACM Conference on Health, Inference, and Learning. Pub Date: 2020-11-18. DOI: 10.1145/3450439.3451863
Dimitris Spathis, I. Perez-Pozuelo, S. Brage, N. Wareham, C. Mascolo
Abstract: Wearable devices such as smartwatches are becoming increasingly popular tools for objectively monitoring physical activity in free-living conditions. To date, research has primarily focused on the purely supervised task of human activity recognition, demonstrating limited success in inferring high-level health outcomes from low-level signals. Here, we present a novel self-supervised representation learning method using activity and heart rate (HR) signals without semantic labels. With a deep neural network, we set HR responses as the supervisory signal for the activity data, leveraging their underlying physiological relationship. In addition, we propose a custom quantile loss function that accounts for the long-tailed HR distribution present in the general population. We evaluate our model on the largest free-living combined-sensing dataset (comprising >280k hours of wrist accelerometer and wearable ECG data). Our contributions are two-fold: (i) the pre-training task creates a model that can accurately forecast HR based only on cheap activity sensors, and (ii) we leverage the information captured by this task by proposing a simple method to aggregate the learnt latent representations (embeddings) from the window level to the user level. Notably, we show that the embeddings can generalize to various downstream tasks through transfer learning with linear classifiers, capturing physiologically meaningful, personalized information. For instance, they can be used to predict variables associated with individuals' health, fitness and demographic characteristics (AUC >70), outperforming unsupervised autoencoders and common biomarkers. Overall, we propose the first multimodal self-supervised method for behavioral and physiological data, with implications for large-scale health and lifestyle monitoring. Code: https://github.com/sdimi/Step2heart.
Citations: 26
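Quantile losses of the kind mentioned above are commonly instantiated as the pinball loss. A minimal sketch (the HR values are hypothetical) showing how an upper quantile penalizes under-predicting rare high heart rates more than over-predicting them:

```python
import numpy as np

def quantile_loss(y_true, y_pred, q):
    """Pinball loss for quantile q: under-prediction costs q per unit of error,
    over-prediction costs (1 - q) per unit."""
    e = y_true - y_pred
    return float(np.mean(np.maximum(q * e, (q - 1) * e)))

# At q = 0.9, under-predicting a rare high HR is 9x costlier than over-predicting
under = quantile_loss(np.array([180.0]), np.array([150.0]), 0.9)  # 0.9 * 30 = 27
over = quantile_loss(np.array([150.0]), np.array([180.0]), 0.9)   # 0.1 * 30 = 3
```

Fitting several quantiles (e.g. 0.1, 0.5, 0.9) lets a model represent the long right tail of the population HR distribution instead of regressing to the mean.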
Phenotypical ontology driven framework for multi-task learning
Proceedings of the ACM Conference on Health, Inference, and Learning. Pub Date: 2020-09-04. DOI: 10.1145/3450439.3451881
Mohamed F. Ghalwash, Zijun Yao, P. Chakraborty, James Codella, D. Sow
Abstract: Despite the large number of patients in Electronic Health Records (EHRs), the subset of usable data for modeling outcomes of specific phenotypes is often imbalanced and of modest size. This can be attributed to the uneven coverage of medical concepts in EHRs. We propose OMTL, an Ontology-driven Multi-Task Learning framework, designed to overcome such data limitations. The key contribution of our work is the effective use of knowledge from a predefined, well-established medical relationship graph (ontology) to construct a novel deep learning network architecture that mirrors this ontology. This enables common representations to be shared across related phenotypes, which we found to improve learning performance. OMTL naturally allows for multi-task learning of different phenotypes on distinct predictive tasks; the phenotypes are tied together by their semantic relationships in the external medical ontology. Using the publicly available MIMIC-III database, we evaluate OMTL and demonstrate its efficacy on several real patient outcome predictions over state-of-the-art multi-task learning schemes. Across six experiments, the proposed approach improves the area under the ROC curve by 9% and the area under the precision-recall curve by 8%.
Citations: 3
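One way to picture an ontology-mirroring architecture: each ontology node gets its own transform applied on top of its parent's representation, so sibling phenotypes share every ancestor layer. The node names, sizes, and recursive sharing scheme here are illustrative assumptions, not OMTL's actual design:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def node_rep(node, x, parents, weights):
    """Representation at an ontology node: this node's transform applied to its
    parent's representation, so siblings reuse all ancestor computations."""
    h = x if parents[node] is None else node_rep(parents[node], x, parents, weights)
    return relu(weights[node] @ h)

# Toy ontology (child -> parent); names are hypothetical
parents = {"disease": None, "endocrine": "disease",
           "diabetes": "endocrine", "thyroid": "endocrine"}
rng = np.random.default_rng(0)
weights = {n: rng.uniform(0.1, 0.5, (8, 8)) for n in parents}  # one transform per concept
x = rng.uniform(0.5, 1.5, 8)  # a patient's input features

# The two sibling phenotypes share the disease -> endocrine layers
rep_diabetes = node_rep("diabetes", x, parents, weights)
rep_thyroid = node_rep("thyroid", x, parents, weights)
# A per-phenotype prediction head would sit on top of each leaf representation
```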
Temporal pointwise convolutional networks for length of stay prediction in the intensive care unit
Proceedings of the ACM Conference on Health, Inference, and Learning. Pub Date: 2020-07-18. DOI: 10.1145/3450439.3451860
Emma Rocheteau, P. Lio’, Stephanie L. Hyland
Abstract: The pressure of ever-increasing patient demand and budget restrictions makes hospital bed management a daily challenge for clinical staff. Most critical is the efficient allocation of resource-heavy Intensive Care Unit (ICU) beds to the patients who need life support. Central to solving this problem is knowing how long the current set of ICU patients is likely to stay in the unit. In this work, we propose a new deep learning model, based on the combination of temporal convolution and pointwise (1x1) convolution, to solve the length of stay prediction task on the eICU and MIMIC-IV critical care datasets. The model, which we refer to as Temporal Pointwise Convolution (TPC), is specifically designed to mitigate common challenges with Electronic Health Records, such as skewness, irregular sampling and missing data. In doing so, we achieve significant performance benefits of 18-68% (metric and dataset dependent) over the commonly used Long Short-Term Memory (LSTM) network and the multi-head self-attention network known as the Transformer. By adding mortality prediction as a side task, we can improve performance further still, resulting in a mean absolute deviation of 1.55 days (eICU) and 2.28 days (MIMIC-IV) when predicting remaining length of stay.
Citations: 33
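The temporal-plus-pointwise combination can be sketched as a causal temporal convolution (the same kernel applied to each feature independently) followed by a 1x1 convolution that mixes features at each time step. This is a schematic of the idea, not the TPC block from the paper:

```python
import numpy as np

def temporal_pointwise(x, k_temp, w_point):
    """Causal depthwise temporal convolution, then a pointwise (1x1)
    convolution mixing the features at each time step."""
    T, F = x.shape
    K = len(k_temp)
    pad = np.vstack([np.zeros((K - 1, F)), x])  # left-pad so the conv is causal
    temporal = np.stack(
        [sum(k_temp[j] * pad[t + K - 1 - j] for j in range(K)) for t in range(T)]
    )
    return temporal @ w_point  # (T, F) @ (F, F): linear feature mix per step

# Toy input: 6 time steps, 2 features; kernel = 2-step moving average
x = np.arange(12.0).reshape(6, 2)
y = temporal_pointwise(x, k_temp=np.array([0.5, 0.5]), w_point=np.eye(2))
# y[t] = 0.5 * (x[t] + x[t-1]) since the pointwise mix is the identity here
```

Stacking such blocks with growing temporal kernels gives the network a long receptive field over irregularly informative EHR time series.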
BMM-Net: automatic segmentation of edema in optical coherence tomography based on boundary detection and multi-scale network
Proceedings of the ACM Conference on Health, Inference, and Learning. Pub Date: 2020-04-02. DOI: 10.1145/3368555.3384447
Ruru Zhang, Jiawen He, Shenda Shi, E. Haihong, Zhonghong Ou, Meina Song
Abstract: Retinal effusions and cysts caused by the leakage of damaged macular vessels and choroidal neovascularization are symptoms of many ophthalmic diseases. Optical coherence tomography (OCT), which provides clear 10-layer cross-sectional images of the retina, is widely used to screen for various ophthalmic diseases. Many researchers have applied deep learning to the semantic segmentation of lesion areas, such as effusion, in OCT images, with good results. However, the low contrast of lesion areas and the uneven sizes of lesions limit the accuracy of deep learning semantic segmentation models. In this paper, we propose a boundary multi-scale multi-task OCT segmentation network (BMM-Net) to address these two challenges and segment the retinal edema area, subretinal fluid, and pigment epithelial detachment in OCT images. We propose a boundary extraction module, a multi-scale information perception module, and a classification module to capture accurate position and semantic information and collaboratively extract meaningful features. We train and validate on the AI Challenger competition dataset. The average Dice coefficient over the three lesion areas reaches 0.8222, 3.058% higher than the most commonly used model in medical image segmentation.
Citations: 1
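The Dice coefficient used as the evaluation metric above is straightforward to compute from binary masks; a minimal sketch with a toy prediction and target:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between binary masks: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return float((2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

# 4-pixel prediction vs 6-pixel target with 4 pixels of overlap
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
target = np.zeros((4, 4), dtype=bool); target[1:3, 1:4] = True
d = dice(pred, target)  # 2*4 / (4+6) = 0.8
```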
Using SNOMED to automate clinical concept mapping
Proceedings of the ACM Conference on Health, Inference, and Learning. Pub Date: 2020-04-02. DOI: 10.1145/3368555.3384453
Shaun Gupta, Frederik Dieleman, P. Long, O. Doyle, N. Leavitt
Abstract: The International Classification of Disease (ICD) is a widely used diagnostic ontology for the classification of health disorders and a valuable resource for healthcare analytics. However, ICD is an evolving ontology subject to periodic revisions (e.g. ICD-9-CM to ICD-10-CM), resulting in the absence of complete cross-walks between versions. While clinical experts can create custom mappings across ICD versions, this process is both time-consuming and costly. We propose an automated solution that facilitates interoperability without sacrificing accuracy. Our solution leverages the SNOMED-CT ontology, in which medical concepts are organised in a directed acyclic graph. We use this to map ICD-9-CM to ICD-10-CM by associating codes with clinical concepts in the SNOMED graph, using a nearest-neighbors search in combination with natural language processing. To assess the impact of our method, we compared the performance of a gradient boosted tree (XGBoost) developed to classify patients with Exocrine Pancreatic Insufficiency (EPI) disorder when using features constructed by our solution versus clinically driven methods. The dataset comprised 23,204 EPI patients and 277,324 non-EPI patients, with data spanning October 2011 to April 2017. Our algorithm generated clinical predictors with comparable stability across the ICD-9-CM to ICD-10-CM transition point when compared to ICD-9-CM/ICD-10-CM mappings generated by clinical experts. Preliminary modeling results showed highly similar performance for models based on the SNOMED mapping versus the clinically defined mapping (71% precision at 20% recall for both models). Overall, the framework does not compromise accuracy at the individual code level or at the model level, while obviating the need for time-consuming manual mapping.
Citations: 1
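The nearest-neighbors search over the SNOMED graph can be pictured as a breadth-first walk from the ICD-9 code's concept to the closest concept that carries an ICD-10 mapping. The codes and graph below are placeholders, and the real method also incorporates natural language processing:

```python
from collections import deque

def map_icd9_to_icd10(icd9_code, icd9_to_snomed, snomed_graph, snomed_to_icd10):
    """Breadth-first walk over the SNOMED concept graph, starting from the
    ICD-9 code's concept, returning the nearest concept's ICD-10 codes."""
    start = icd9_to_snomed[icd9_code]
    seen, queue = {start}, deque([start])
    while queue:
        concept = queue.popleft()
        if concept in snomed_to_icd10:
            return snomed_to_icd10[concept]
        for neighbor in snomed_graph.get(concept, []):  # parents and children
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return None  # no mapped concept reachable

# Placeholder codes and a three-concept toy graph
result = map_icd9_to_icd10(
    "ICD9:A",
    icd9_to_snomed={"ICD9:A": "C1"},
    snomed_graph={"C1": ["C2"], "C2": ["C3"]},
    snomed_to_icd10={"C3": ["ICD10:X"]},
)
```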