Radiology: Artificial Intelligence — Latest Articles

Single Inspiratory Chest CT-based Generative Deep Learning Models to Evaluate Functional Small Airways Disease.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.240680
Di Zhang, Mingyue Zhao, Xiuxiu Zhou, Yiwei Li, Yu Guan, Yi Xia, Jin Zhang, Qi Dai, Jingfeng Zhang, Li Fan, S Kevin Zhou, Shiyuan Liu
Purpose To develop a deep learning model that uses a single inspiratory chest CT scan to perform parametric response mapping (PRM) and predict functional small airways disease (fSAD).
Materials and Methods In this retrospective study, predictive and generative deep learning models for PRM using inspiratory chest CT were developed using a model development dataset with fivefold cross-validation, with PRM derived from paired respiratory CT as the reference standard. Voxelwise metrics, including sensitivity, area under the receiver operating characteristic curve (AUC), and structural similarity index measure, were used to evaluate model performance in predicting PRM and generating expiratory CT images. The best-performing model was tested on three internal test sets and an external test set.
Results The model development dataset of 308 individuals (median age, 67 years [IQR: 62-70 years]; 113 female) was divided into the training set (n = 216), the internal validation set (n = 31), and the first internal test set (n = 61). The generative model outperformed the predictive model in detecting fSAD (sensitivity, 86.3% vs 38.9%; AUC, 0.86 vs 0.70). The generative model performed well in the second internal (AUCs of 0.64, 0.84, and 0.97 for emphysema, fSAD, and normal lung tissue, respectively), the third internal (AUCs of 0.63, 0.83, and 0.97), and the external (AUCs of 0.58, 0.85, and 0.94) test sets. Notably, the model exhibited exceptional performance in the preserved ratio impaired spirometry group of the fourth internal test set (AUCs of 0.62, 0.88, and 0.96).
Conclusion The proposed generative model, using a single inspiratory CT scan, outperformed existing algorithms in PRM evaluation and achieved comparable results to paired respiratory CT.
Keywords: CT, Lung, Chronic Obstructive Pulmonary Disease, Diagnosis, Reconstruction Algorithms, Deep Learning, Parametric Response Mapping, X-ray Computed Tomography, Small Airways
Supplemental material is available for this article. © The Author(s) 2025. Published by the Radiological Society of North America under a CC BY 4.0 license. See also the commentary by Hathaway and Singh in this issue.
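For context on what the model above predicts: PRM is a voxelwise classification of co-registered inspiratory/expiratory CT. The abstract does not state the study's thresholds; the sketch below uses the thresholds commonly cited in the PRM literature (inspiratory −950 HU for emphysema, expiratory −856 HU for air trapping) and is an illustrative assumption, not the paper's pipeline.

```python
import numpy as np

# Commonly cited PRM thresholds (assumed here; the study's exact values may differ).
INSP_EMPH = -950  # inspiratory HU threshold for emphysema
EXP_AIR = -856    # expiratory HU threshold for air trapping

def prm_classify(insp_hu: np.ndarray, exp_hu: np.ndarray) -> np.ndarray:
    """Voxelwise PRM labels: 0 = normal, 1 = fSAD, 2 = emphysema.
    Assumes the two volumes are co-registered and restricted to the lung mask."""
    labels = np.zeros(insp_hu.shape, dtype=np.uint8)
    trapped = exp_hu < EXP_AIR
    emph = insp_hu < INSP_EMPH
    labels[trapped & ~emph] = 1  # air trapping without emphysema -> fSAD
    labels[trapped & emph] = 2   # emphysema
    return labels

insp = np.array([[-920.0, -960.0], [-700.0, -910.0]])
exp_ = np.array([[-870.0, -880.0], [-700.0, -840.0]])
print(prm_classify(insp, exp_))  # fSAD, emphysema / normal, normal
```

The generative approach in the paper synthesizes the expiratory scan from the inspiratory one, so a map like this can be computed without acquiring the second CT.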
Optimizing Federated Learning Configurations for MRI Prostate Segmentation and Cancer Detection: A Simulation Study.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.240485
Ashkan Moradi, Fadila Zerka, Joeran Sander Bosma, Mohammed R S Sunoqrot, Bendik S Abrahamsen, Derya Yakar, Jeroen Geerdink, Henkjan Huisman, Tone Frost Bathen, Mattijs Elschot
Purpose To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection.
Materials and Methods A retrospective study was conducted using Flower FL (Flower.ai) to train an nnU-Net-based architecture for MRI prostate segmentation and csPCa detection using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MR images (four clients, 1294 patients) and csPCa detection using biparametric MR images (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and the average precision, for csPCa detection. P values for performance differences were calculated using permutation testing.
Results The FL configurations were independently optimized for both tasks, showing the best performance at 1 local epoch (300 rounds) using FedMedian for prostate segmentation and 5 local epochs (200 rounds) using FedAdagrad for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation (Dice score, increase from 0.73 ± 0.06 [SD] to 0.88 ± 0.03; P ≤ .01) and csPCa detection (PI-CAI score, increase from 0.63 ± 0.07 to 0.74 ± 0.06; P ≤ .01) on the independent test set. The optimized FL model showed higher lesion detection performance compared with the FL-baseline model (PI-CAI score, increase from 0.72 ± 0.06 to 0.74 ± 0.06; P ≤ .01), but no evidence of a difference was observed for prostate segmentation (Dice scores, 0.87 ± 0.03 vs 0.88 ± 0.03; P > .05).
Conclusion FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance.
Keywords: Federated Learning, Prostate Cancer, MRI, Cancer Detection, Deep Learning
Supplemental material is available for this article. © RSNA, 2025.
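The abstract defines the PI-CAI score as the average of the area under the ROC curve and the average precision. A minimal sketch with scikit-learn (the toy labels and scores below are illustrative, not study data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def pi_cai_score(y_true, y_score):
    """PI-CAI-style score per the abstract's definition:
    the mean of AUROC and average precision (AP)."""
    auroc = roc_auc_score(y_true, y_score)
    ap = average_precision_score(y_true, y_score)
    return (auroc + ap) / 2

# Toy per-lesion labels and model confidences.
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])
print(round(pi_cai_score(y_true, y_score), 3))  # 0.903
```

Averaging a ranking metric (AUROC) with a precision-oriented one (AP) rewards models that both separate classes and rank true lesions highly, which matters under class imbalance.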
Advancing Early Detection of Chronic Obstructive Pulmonary Disease Using Generative AI.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.250555
Quincy A Hathaway, Yashbir Singh
(Commentary; no abstract available.)
Prediction of Early Neoadjuvant Chemotherapy Response of Breast Cancer through Deep Learning-based Pharmacokinetic Quantification of DCE MRI.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-09-01 DOI: 10.1148/ryai.240769
Chaowei Wu, Lixia Wang, Nan Wang, Stephen Shiao, Tai Dou, Yin-Chen Hsu, Anthony G Christodoulou, Yibin Xie, Debiao Li
Purpose To improve the generalizability of pathologic complete response prediction following neoadjuvant chemotherapy using deep learning-based retrospective pharmacokinetic quantification of early treatment dynamic contrast-enhanced MRI.
Materials and Methods This multicenter retrospective study included breast MRI data from four publicly available datasets of patients with breast cancer acquired from May 2002 to November 2016. Pharmacokinetic quantification was performed using a previously developed deep learning model for clinical multiphasic dynamic contrast-enhanced MRI datasets. Radiomic analysis was performed on pharmacokinetic quantification maps and conventional enhancement maps. These data, together with clinicopathologic variables and shape-based radiomic analysis, were subsequently applied for pathologic complete response prediction using logistic regression. Prediction performance was evaluated by area under the receiver operating characteristic curve (AUC).
Results A total of 1073 female patients with breast cancer were included. The proposed method showed improved consistency and generalizability compared with the reference method, achieving higher AUC values across external datasets (0.82 [95% CI: 0.72, 0.91], 0.75 [95% CI: 0.71, 0.79], and 0.77 [95% CI: 0.66, 0.86] for datasets A2, B, and C, respectively). For dataset A2 (from the same study as the training dataset), there was no significant difference in performance between the proposed method and the reference method (P = .80). Notably, on the combined external datasets, the proposed method significantly outperformed the reference method (AUC, 0.75 [95% CI: 0.72, 0.79] vs 0.71 [95% CI: 0.68, 0.76]; P = .003).
Conclusion This work offers an approach to improve the generalizability and predictive accuracy of pathologic complete response for breast cancer across diverse datasets, achieving higher and more consistent AUC scores than existing methods.
Keywords: Tumor Response, Breast, Prognosis, Dynamic Contrast-enhanced MRI
Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Schnitzler in this issue.
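The final prediction step above is a logistic regression over radiomic and clinicopathologic features, evaluated by AUC. A minimal sketch of that pattern with scikit-learn, using synthetic stand-in features (not the study's data or its actual feature set):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-ins for radiomic + clinicopathologic feature columns.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
# Standardize features, then fit a logistic regression classifier.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```

A linear model is a common choice here: with ~1000 patients and many radiomic features, it is less prone to overfitting than deeper models and its coefficients remain interpretable.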
The Evolution of Radiology Image Annotation in the Era of Large Language Models.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240631
Adam E Flanders, Xindi Wang, Carol C Wu, Felipe C Kitamura, George Shih, John Mongan, Yifan Peng
Although there are relatively few diverse, high-quality medical imaging datasets on which to train computer vision artificial intelligence models, even fewer datasets contain expertly classified observations that can be repurposed to train or test such models. The traditional annotation process is laborious and time-consuming. Repurposing annotations and consolidating similar types of annotations from disparate sources has never been practical. Until recently, the use of natural language processing to convert a clinical radiology report into labels required custom training of a language model for each use case. Newer technologies such as large language models have made it possible to generate accurate and normalized labels at scale, using only clinical reports and specific prompt engineering. The combination of automatically generated labels extracted and normalized from reports in conjunction with foundational image models provides a means to create labels for model training. This article provides a short history and review of the annotation and labeling process of medical images, from the traditional manual methods to the newest semiautomated methods that provide a more scalable solution for creating useful models more efficiently.
Keywords: Feature Detection, Diagnosis, Semi-supervised Learning
© RSNA, 2025.
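The label-extraction workflow the review describes can be sketched as a prompt plus JSON normalization. Everything below is hypothetical: the label schema, prompt wording, and `call_llm` stub are illustrative stand-ins, not any specific vendor's API.

```python
import json

LABELS = ["pneumothorax", "pleural_effusion", "consolidation"]

PROMPT_TEMPLATE = """You are labeling a chest radiograph report.
Return a JSON object mapping each of {labels} to "present", "absent", or "uncertain".
Report: {report}"""

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns a canned response for illustration.
    return json.dumps({"pneumothorax": "absent",
                       "pleural_effusion": "present",
                       "consolidation": "uncertain"})

def extract_labels(report: str) -> dict:
    """Prompt the model, then normalize: keep only expected keys and values
    so downstream training code sees a fixed, validated schema."""
    raw = call_llm(PROMPT_TEMPLATE.format(labels=LABELS, report=report))
    parsed = json.loads(raw)
    allowed = {"present", "absent", "uncertain"}
    return {k: v for k, v in parsed.items() if k in LABELS and v in allowed}

print(extract_labels("Small left pleural effusion. No pneumothorax."))
```

The normalization step is the point: free-text model output is coerced into a closed vocabulary, which is what makes labels from disparate reports consolidatable at scale.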
Unlocking Robust Segmentation: Decoding Domain Randomization for Radiologists.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.250384
John D Mayfield
(Commentary; no abstract available.)
Cybersecurity Threats and Mitigation Strategies for Large Language Models in Health Care.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240739
Tugba Akinci D'Antonoli, Ali S Tejani, Bardia Khosravi, Christian Bluethgen, Felix Busch, Keno K Bressem, Lisa C Adams, Mana Moassefi, Shahriar Faghani, Judy Wawira Gichoya
The integration of large language models (LLMs) into health care offers tremendous opportunities to improve medical practice and patient care. Besides being susceptible to biases and threats common to all artificial intelligence (AI) systems, LLMs pose unique cybersecurity risks that must be carefully evaluated before these AI models are deployed in health care. LLMs can be exploited in several ways, such as malicious attacks, privacy breaches, and unauthorized manipulation of patient data. Moreover, malicious actors could use LLMs to infer sensitive patient information from training data. Furthermore, manipulated or poisoned data fed into these models could change their results in a way that is beneficial for the malicious actors. This report presents the cybersecurity challenges posed by LLMs in health care and provides strategies for mitigation. By implementing robust security measures and adhering to best practices during the model development, training, and deployment stages, stakeholders can help minimize these risks and protect patient privacy.
Keywords: Computer Applications-General (Informatics), Application Domain, Large Language Models, Artificial Intelligence, Cybersecurity
© RSNA, 2025.
Deep Learning with Domain Randomization in Image and Feature Spaces for Abdominal Multiorgan Segmentation on CT and MRI Scans.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.240586
Yu Shi, Lixia Wang, Touseef Ahmad Qureshi, Zengtian Deng, Yibin Xie, Debiao Li
Purpose To develop a deep learning segmentation model that can segment abdominal organs on CT and MRI scans with high accuracy and generalization ability.
Materials and Methods In this study, an extended nnU-Net model was trained for abdominal organ segmentation. A domain randomization method in both the image and feature space was developed to improve the generalization ability under cross-site and cross-modality settings on public prostate MRI and abdominal CT and MRI datasets. The prostate MRI dataset contains data from multiple health care institutions, with domain shifts. The abdominal CT and MRI dataset is structured for cross-modality evaluation: training on one modality (eg, MRI) and testing on the other (eg, CT). This domain randomization method was then used to train a segmentation model with enhanced generalization ability on the abdominal multiorgan segmentation challenge dataset to improve abdominal CT and MR multiorgan segmentation, and the model was compared with two commonly used segmentation algorithms (TotalSegmentator and MRSegmentator). Model performance was evaluated using the Dice similarity coefficient (DSC).
Results The proposed domain randomization method showed improved generalization ability on the cross-site and cross-modality datasets compared with the state-of-the-art methods. The segmentation model using this method outperformed two other publicly available segmentation models on data from unseen test domains (mean DSC, 0.88 vs 0.79 [P < .001] and 0.88 vs 0.76 [P < .001]).
Conclusion The combination of image and feature domain randomizations improved the accuracy and generalization ability of deep learning-based abdominal segmentation on CT and MR images.
Keywords: Segmentation, CT, MR Imaging, Neural Networks, MRI, Domain Randomization
Supplemental material is available for this article. © RSNA, 2025. See also the commentary by Mayfield in this issue.
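Image-space domain randomization generally means perturbing a scan's appearance during training so the network cannot overfit to one scanner's or modality's intensity statistics. A generic numpy sketch of that idea (random gamma, scaling, and noise on a normalized volume); the paper's actual augmentation pipeline is not specified in the abstract and may differ:

```python
import numpy as np

def randomize_appearance(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Illustrative image-space domain randomization for a scan normalized to [0, 1]:
    random gamma curve, global intensity scaling, and additive Gaussian noise.
    A generic sketch, not the augmentation pipeline used in the paper."""
    out = np.clip(img.astype(np.float32), 0.0, 1.0)
    out = out ** rng.uniform(0.7, 1.5)              # random contrast (gamma) curve
    out = out * rng.uniform(0.9, 1.1)               # random global intensity scaling
    out = out + rng.normal(0.0, 0.02, size=out.shape)  # scanner-like noise
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(42)
scan = rng.uniform(size=(4, 4)).astype(np.float32)
aug = randomize_appearance(scan, rng)
print(aug.shape)
```

Applied with fresh random parameters at every training iteration, such perturbations force the network to rely on anatomy rather than intensity, which is what lets one model transfer across sites and even between CT and MRI.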
Optimizing the Trade-off between Privacy and Utility in Medical Imaging Federated Learning.
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.250434
Zekai Yu
(Commentary; no abstract available.)
AI to Measure Nuchal Translucency: Improved Speed and Accuracy, but Is It Still Relevant?
IF 13.2
Radiology-Artificial Intelligence Pub Date : 2025-07-01 DOI: 10.1148/ryai.250231
Steven C Horii
(Commentary; no abstract available.)