Radiology-Artificial Intelligence: Latest Articles

Improving Automated Hemorrhage Detection at Sparse-View CT via U-Net-based Artifact Reduction.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-07-01 DOI: 10.1148/ryai.230275
Johannes Thalhammer, Manuel Schultheiß, Tina Dorosti, Tobias Lasser, Franz Pfeiffer, Daniela Pfeiffer, Florian Schaff
Purpose: To explore the potential benefits of deep learning-based artifact reduction in sparse-view cranial CT scans and its impact on automated hemorrhage detection.
Materials and Methods: In this retrospective study, a U-Net was trained for artifact reduction on simulated sparse-view cranial CT scans in 3000 patients, obtained from a public dataset and reconstructed with varying sparse-view levels. Additionally, an EfficientNet-B2 was trained on full-view CT data from 17 545 patients for automated hemorrhage detection. Detection performance was evaluated using the area under the receiver operating characteristic curve (AUC), with differences assessed using the DeLong test, along with confusion matrices. A total variation (TV) postprocessing approach, commonly applied to sparse-view CT, served as the basis for comparison. A Bonferroni-corrected significance level of .001/6 = .00017 was used to account for multiple hypothesis testing.
Results: Images with U-Net postprocessing were better than unprocessed and TV-processed images with respect to image quality and automated hemorrhage detection. With U-Net postprocessing, the number of views could be reduced from 4096 (AUC: 0.97 [95% CI: 0.97, 0.98]) to 512 (0.97 [95% CI: 0.97, 0.98], P < .00017) and to 256 views (0.97 [95% CI: 0.96, 0.97], P < .00017) with a minimal decrease in hemorrhage detection performance. This was accompanied by mean structural similarity index measure (SSIM) increases of 0.0210 (95% CI: 0.0210, 0.0211) and 0.0560 (95% CI: 0.0559, 0.0560) relative to unprocessed images.
Conclusion: U-Net-based artifact reduction substantially enhanced automated hemorrhage detection in sparse-view cranial CT scans. (See the illustrative metric sketch after this entry.)
Keywords: CT, Head/Neck, Hemorrhage, Diagnosis, Supervised Learning. Supplemental material is available for this article. © RSNA, 2024.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294955/pdf/
Citations: 0
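The abstract above summarizes image quality as SSIM against full-view reconstructions and hemorrhage detection as AUC. The sketch below shows one way those two summary metrics could be computed with common Python libraries; all arrays, scores, and the helper name mean_ssim are illustrative placeholders, not the authors' code.

```python
# Illustrative sketch only: SSIM for image quality, AUC for hemorrhage detection.
# All data below are synthetic stand-ins.
import numpy as np
from skimage.metrics import structural_similarity
from sklearn.metrics import roc_auc_score

def mean_ssim(reference_slices, test_slices):
    """Average SSIM over corresponding slice pairs (hypothetical helper)."""
    scores = []
    for ref, test in zip(reference_slices, test_slices):
        dr = float(ref.max() - ref.min())  # intensity range of the reference slice
        scores.append(structural_similarity(ref, test, data_range=dr))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
full_view = [rng.normal(size=(64, 64)) for _ in range(3)]                     # stand-in reference slices
sparse_view = [s + rng.normal(scale=0.3, size=s.shape) for s in full_view]    # noisier stand-ins
print("mean SSIM vs full-view:", round(mean_ssim(full_view, sparse_view), 3))

# Hemorrhage detection summarized as AUC: y_true = hemorrhage present (1) or absent (0),
# scores = classifier outputs on unprocessed vs U-Net-postprocessed reconstructions.
y_true = np.array([1, 0, 1, 1, 0, 0])
score_unprocessed = np.array([0.62, 0.40, 0.45, 0.71, 0.35, 0.48])
score_postprocessed = np.array([0.81, 0.22, 0.77, 0.90, 0.15, 0.30])
print("AUC, unprocessed:        ", roc_auc_score(y_true, score_unprocessed))
print("AUC, U-Net postprocessed:", roc_auc_score(y_true, score_postprocessed))
```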
Vision Transformer-based Deep Learning Models Accelerate Further Research for Predicting Neurosurgical Intervention.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-07-01 DOI: 10.1148/ryai.240117
Kengo Takahashi, Takuma Usuzaki, Ryusei Inamori
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294944/pdf/
Citations: 0
Bridging Pixels to Genes.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-07-01 DOI: 10.1148/ryai.240262
Mana Moassefi, Bradley J Erickson
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294947/pdf/
Citations: 0
From Nicki Minaj to Neuroblastoma: What Rigorous Approaches to Rhythms and Radiomics Have in Common.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-07-01 DOI: 10.1148/ryai.240350
Nabile M Safdar, Alina Galaria
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294945/pdf/
Citations: 0
Clinical, Cultural, Computational, and Regulatory Considerations to Deploy AI in Radiology: Perspectives of RSNA and MICCAI Experts.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-07-01 DOI: 10.1148/ryai.240225
Marius George Linguraru, Spyridon Bakas, Mariam Aboian, Peter D Chang, Adam E Flanders, Jayashree Kalpathy-Cramer, Felipe C Kitamura, Matthew P Lungren, John Mongan, Luciano M Prevedello, Ronald M Summers, Carol C Wu, Maruf Adewole, Charles E Kahn
The Radiological Society of North America (RSNA) and the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society have led a series of joint panels and seminars focused on the present impact and future directions of artificial intelligence (AI) in radiology. These conversations have collected viewpoints from multidisciplinary experts in radiology, medical imaging, and machine learning on the current clinical penetration of AI technology in radiology and how it is affected by trust, reproducibility, explainability, and accountability. The collective points, both practical and philosophical, define the cultural changes for radiologists and AI scientists working together and describe the challenges ahead for AI technologies to meet broad approval. This article presents the perspectives of experts from MICCAI and RSNA on the clinical, cultural, computational, and regulatory considerations, coupled with recommended reading materials, essential to adopting AI technology successfully in radiology and, more generally, in clinical practice. The report emphasizes the importance of collaboration to improve clinical deployment, highlights the need to integrate clinical and medical imaging data, and introduces strategies to ensure smooth and incentivized integration.
Keywords: Adults and Pediatrics, Computer Applications-General (Informatics), Diagnosis, Prognosis. © RSNA, 2024.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294958/pdf/
Citations: 0
Optimizing Performance of Transformer-based Models for Fetal Brain MR Image Segmentation.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-06-26 DOI: 10.1148/ryai.230229
Nicolò Pecco, Pasquale Anthony Della Rosa, Matteo Canini, Gianluca Nocera, Paola Scifo, Paolo Ivo Cavoretto, Massimo Candiani, Andrea Falini, Antonella Castellano, Cristina Baldoli
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence; this article will undergo copyediting, layout, and proof review before final publication.
Purpose: To test transformer-based models' performance when manipulating pretraining weights, dataset size, and input size, and to compare the best model with reference standard and state-of-the-art models for a resting-state functional MRI (rs-fMRI) fetal brain extraction task.
Materials and Methods: An internal retrospective dataset (172 fetuses; 519 images; collected from 2018 to 2022) was used to investigate the influence of dataset size, pretraining approach, and image input size on Swin-UNETR and UNETR models. The internal dataset and an external dataset (131 fetuses; 561 images) were used to cross-validate and to assess the generalization capability of the best model against state-of-the-art models across scanner types and numbers of gestational weeks (GW). The Dice similarity coefficient (DSC) and the balanced average Hausdorff distance (BAHD) were used as segmentation performance metrics. GEE multifactorial models were used to assess significant model and interaction effects of interest.
Results: Swin-UNETR was not affected by pretraining approach or dataset size and performed best with the mean dataset image size, with a mean DSC of 0.92 and BAHD of 0.097. Swin-UNETR was not affected by scanner type. Generalization results on the internal dataset showed that Swin-UNETR had lower performance compared with reference standard models, and comparable performance on the external dataset. Cross-validation on internal and external test sets demonstrated better or comparable performance of Swin-UNETR versus convolutional neural network architectures during the late-fetal period (GW > 25) but lower performance during the midfetal period (GW ≤ 25).
Conclusion: Swin-UNETR showed flexibility in dealing with smaller datasets, regardless of pretraining approach. For fetal brain extraction from rs-fMRI, Swin-UNETR showed comparable performance with reference standard models during the late-fetal period and lower performance during the early gestational weeks. © RSNA, 2024. (See the illustrative metric sketch after this entry.)
Citations: 0
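The segmentation comparison above is scored primarily with the Dice similarity coefficient. As a quick reference, here is a minimal NumPy sketch of the Dice computation for binary masks; the masks are toy placeholders, and the balanced average Hausdorff distance would require a separate surface-distance implementation.

```python
# Minimal sketch (not the study's pipeline): Dice similarity coefficient for binary masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Toy example: two partially overlapping "brain" masks.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
target = np.zeros((8, 8), dtype=bool); target[3:7, 3:7] = True
print(f"DSC = {dice_coefficient(pred, target):.3f}")  # ~0.56 for this toy overlap
```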
Performance of an Artificial Intelligence System for Breast Cancer Detection on Screening Mammograms from BreastScreen Norway.
IF 9.8
Radiology-Artificial Intelligence Pub Date: 2024-05-01 DOI: 10.1148/ryai.230375
Marthe Larsen, Camilla F Olstad, Christoph I Lee, Tone Hovda, Solveig R Hoff, Marit A Martiniussen, Karl Øyvind Mikalsen, Håkon Lund-Hanssen, Helene S Solli, Marko Silberhorn, Åse Ø Sulheim, Steinar Auensen, Jan F Nygård, Solveig Hofvind
Purpose: To explore the stand-alone breast cancer detection performance, at different risk score thresholds, of a commercially available artificial intelligence (AI) system.
Materials and Methods: This retrospective study included information from 661 695 digital mammographic examinations performed among 242 629 female individuals screened as part of BreastScreen Norway, 2004-2018. The study sample included 3807 screen-detected cancers and 1110 interval breast cancers. A continuous examination-level risk score from the AI system was used to measure performance as the area under the receiver operating characteristic curve (AUC) with 95% CIs and as cancer detection at different AI risk score thresholds.
Results: The AUC of the AI system was 0.93 (95% CI: 0.92, 0.93) for screen-detected cancers and interval breast cancers combined and 0.97 (95% CI: 0.97, 0.97) for screen-detected cancers. In a setting where the 10% of examinations with the highest AI risk scores were defined as positive and the 90% with the lowest scores as negative, 92.0% (3502 of 3807) of the screen-detected cancers and 44.6% (495 of 1110) of the interval breast cancers were identified with AI. In this scenario, 68.5% (10 987 of 16 040) of false-positive screening results (negative recall assessment) were considered negative by AI. When 50% was used as the cutoff, 99.3% (3781 of 3807) of the screen-detected cancers and 85.2% (946 of 1110) of the interval breast cancers were identified as positive by AI, whereas 17.0% (2725 of 16 040) of the false-positive results were considered negative.
Conclusion: The AI system showed high performance in detecting breast cancers within 2 years of screening mammography and potential for use in triaging low-risk mammograms to reduce radiologist workload. (See the illustrative thresholding sketch after this entry.)
Keywords: Mammography, Breast, Screening, Convolutional Neural Network (CNN), Deep Learning Algorithms. Supplemental material is available for this article. © RSNA, 2024. See also commentary by Bahl and Do in this issue.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140504/pdf/
Citations: 0
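The results above hinge on turning a continuous AI risk score into a positive/negative call by percentile cutoff (top 10% or top 50% of scores flagged as positive) and reporting what fraction of screen-detected and interval cancers fall above that cutoff. The sketch below illustrates that thresholding logic; it is not the vendor's system, and all scores are synthetic placeholders.

```python
# Illustrative sketch of percentile-based risk score thresholding; synthetic data only.
import numpy as np

rng = np.random.default_rng(42)
risk_scores = rng.uniform(0, 1, size=10_000)        # continuous AI risk score for every screening exam
screen_detected = rng.uniform(0.5, 1, size=300)     # stand-in scores for screen-detected cancers
interval_cancers = rng.uniform(0.2, 1, size=100)    # stand-in scores for interval cancers

def detection_at_threshold(all_scores, cancer_scores, positive_fraction):
    """Fraction of cancers whose score falls within the top `positive_fraction` of all exams."""
    cutoff = np.quantile(all_scores, 1 - positive_fraction)
    return float((cancer_scores >= cutoff).mean())

for frac in (0.10, 0.50):
    sd = detection_at_threshold(risk_scores, screen_detected, frac)
    ic = detection_at_threshold(risk_scores, interval_cancers, frac)
    print(f"top {frac:.0%} positive -> screen-detected {sd:.1%}, interval {ic:.1%}")
```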
Efficient Health Care: Decreasing MRI Scan Time.
IF 9.8
Radiology-Artificial Intelligence Pub Date: 2024-05-01 DOI: 10.1148/ryai.240174
Farid GharehMohammadi, Ronnie A Sebro
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140514/pdf/
Citations: 0
Semi-supervised Learning for Generalizable Intracranial Hemorrhage Detection and Segmentation.
IF 9.8
Radiology-Artificial Intelligence Pub Date: 2024-05-01 DOI: 10.1148/ryai.230077
Emily Lin, Esther L Yuh
Purpose: To develop and evaluate a semi-supervised learning model for intracranial hemorrhage detection and segmentation on an out-of-distribution head CT evaluation set.
Materials and Methods: This retrospective study used semi-supervised learning to bootstrap performance. An initial "teacher" deep learning model was trained on 457 pixel-labeled head CT scans collected from one U.S. institution from 2010 to 2017 and used to generate pseudo labels on a separate unlabeled corpus of 25 000 examinations from the Radiological Society of North America and the American Society of Neuroradiology. A second "student" model was trained on this combined pixel- and pseudo-labeled dataset. Hyperparameter tuning was performed on a validation set of 93 scans. Testing for both classification (n = 481 examinations) and segmentation (n = 23 examinations, or 529 images) was performed on CQ500, a dataset of 481 scans performed in India, to evaluate out-of-distribution generalizability. The semi-supervised model was compared with a baseline model trained on only labeled data using area under the receiver operating characteristic curve (AUC), Dice similarity coefficient, and average precision metrics.
Results: The semi-supervised model achieved a statistically significantly higher examination-level AUC on CQ500 compared with the baseline (0.939 [95% CI: 0.938, 0.940] vs 0.907 [95% CI: 0.906, 0.908]; P = .009). It also achieved a higher Dice similarity coefficient (0.829 [95% CI: 0.825, 0.833] vs 0.809 [95% CI: 0.803, 0.812]; P = .012) and higher pixel average precision (0.848 [95% CI: 0.843, 0.853] vs 0.828 [95% CI: 0.817, 0.828]) compared with the baseline.
Conclusion: The addition of unlabeled data in a semi-supervised learning framework demonstrates stronger generalizability potential for intracranial hemorrhage detection and segmentation compared with a supervised baseline. (See the illustrative pseudo-labeling sketch after this entry.)
Keywords: Semi-supervised Learning, Traumatic Brain Injury, CT, Machine Learning. Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Swimburne in this issue.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140498/pdf/
Citations: 0
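The teacher-student pseudo-labeling loop described above (teacher trained on the pixel-labeled scans, pseudo labels generated for the unlabeled corpus, student trained on the union) can be summarized in a few lines. The sketch below uses a toy scikit-learn classifier on synthetic features purely to show the data flow; the study itself used deep segmentation networks, and all names and sizes here are placeholders.

```python
# Illustrative teacher-student pseudo-labeling sketch; synthetic tabular data
# stands in for labeled and unlabeled imaging examinations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(457, 16))
y_labeled = (X_labeled[:, 0] + rng.normal(scale=0.5, size=457) > 0).astype(int)
X_unlabeled = rng.normal(size=(25_000, 16))  # no ground truth available

# 1) "Teacher" trained on the labeled data only.
teacher = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

# 2) Teacher generates pseudo labels for the unlabeled corpus.
pseudo_labels = teacher.predict(X_unlabeled)

# 3) "Student" trained on the combined labeled + pseudo-labeled set.
X_combined = np.vstack([X_labeled, X_unlabeled])
y_combined = np.concatenate([y_labeled, pseudo_labels])
student = LogisticRegression(max_iter=1000).fit(X_combined, y_combined)

print("student trained on", len(y_combined), "examples")
```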
Faster, More Practical, but Still Accurate: Deep Learning for Diagnosis of Progressive Supranuclear Palsy.
IF 9.8
Radiology-Artificial Intelligence Pub Date: 2024-05-01 DOI: 10.1148/ryai.240181
Bahram Mohajer
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11140513/pdf/
Citations: 0