Artificial intelligence in radiology

D. Rubin
Journal of the Mexican Federation of Radiology and Imaging. DOI: 10.24875/jmexfri.m24000073. Published 2024-07-10.

Abstract

Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.

Perspectives. © 2018 Macmillan Publishers Limited, part of Springer Nature. All rights reserved. Nature Reviews Cancer.

AI in medical imaging

The primary driver behind the emergence of AI in medical imaging has been the desire for greater efficacy and efficiency in clinical care. Radiological imaging data continues to grow at a disproportionate rate when compared with the number of available trained readers, and the decline in imaging reimbursements has forced healthcare providers to compensate by increasing productivity24. These factors have contributed to a dramatic increase in radiologists' workloads. Studies report that, in some cases, an average radiologist must interpret one image every 3–4 seconds in an 8-hour workday to meet workload demands25.
As radiology involves visual perception as well as decision making under uncertainty26, errors are inevitable, especially under such constrained conditions. A seamlessly integrated AI component within the imaging workflow would increase efficiency, reduce errors and achieve objectives with minimal manual input by providing trained radiologists with pre-screened images and identified features. Therefore, substantial efforts and policies are being put forward to facilitate technological advances related to AI in medical imaging. Almost all image-based radiology tasks are contingent upon the quantification and assessment of radiographic characteristics from images. These characteristics can be important for the clinical task at hand, that is, for the detection, characterization or monitoring of diseases. The application of logic and statistical pattern recognition to problems in medicine has been proposed since the early 1960s27,28. As computers became more prevalent in the 1980s, the AI-powered automation of many clinical tasks shifted radiology from a perceptual subjective craft to a quantitatively computable domain29,30. The rate at which AI is advancing radiology parallels that in other application areas and is proportional to the rapid growth of data and computational power. There are two classes of AI methods in wide use today (Box 1; Fig. 2). The first uses handcrafted engineered features that are defined in terms of mathematical equations (such as tumour texture) and can thus be quantified using computer programs31. These features are used as inputs to state-of-the-art machine learning models that are trained to classify patients in ways that can support clinical decision making. Although such features are perceived to be discriminative, they rely on expert definition and hence do not necessarily represent the optimal feature quantification approach for the discrimination task at hand.
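As a loose illustration of this first class of methods, the sketch below quantifies two synthetic regions of interest with fixed, expert-style formulas (first-order histogram statistics and a crude texture measure) and classifies them with a toy nearest-centroid rule standing in for a trained model. Every name and array here is invented for illustration; real radiomics pipelines use far richer, validated feature sets and classifiers.

```python
import numpy as np

def handcrafted_features(roi: np.ndarray) -> np.ndarray:
    """Quantify a region of interest with fixed, expert-defined formulas:
    histogram statistics plus a simple texture measure."""
    flat = roi.ravel().astype(float)
    mean, std = flat.mean(), flat.std()
    # Histogram entropy: spread of pixel intensities across 16 bins.
    hist, _ = np.histogram(flat, bins=16)
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    # Crude texture: mean absolute difference between horizontal neighbours.
    texture = np.abs(np.diff(roi.astype(float), axis=1)).mean()
    return np.array([mean, std, entropy, texture])

def nearest_centroid_predict(x, centroids):
    """Toy stand-in for a trained classifier: pick the closest class centroid."""
    return int(np.argmin([np.linalg.norm(x - c) for c in centroids]))

# Two synthetic 'tumour' ROIs: one homogeneous, one heterogeneous.
rng = np.random.default_rng(0)
smooth = np.full((32, 32), 100.0) + rng.normal(0, 1, (32, 32))
noisy = np.full((32, 32), 100.0) + rng.normal(0, 25, (32, 32))

f_smooth = handcrafted_features(smooth)
f_noisy = handcrafted_features(noisy)
# The heterogeneous ROI scores higher on the std and texture features.
assert f_noisy[1] > f_smooth[1] and f_noisy[3] > f_smooth[3]
```

Because every feature has a closed-form definition, the same numbers can be recomputed by any implementation, which is exactly the reproducibility that expert-defined radiomics aims for.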
Moreover, predefined features are often unable to adapt to variations in imaging modalities, such as computed tomography (CT), positron emission tomography (PET) and magnetic resonance imaging (MRI), and their associated signal-to-noise characteristics. The second method, deep learning, has gained considerable attention in recent years. Deep learning algorithms can automatically learn feature representations from data without the need for prior definition by human experts. This data-driven approach allows for more abstract feature definitions, making them more informative and generalizable. Deep learning can thus automatically quantify phenotypic characteristics of human tissues32, promising substantial improvements in diagnosis and clinical care. Deep learning has the added benefit of reducing the need for manual preprocessing steps. For example, to extract predefined features, accurate segmentation of diseased tissues by experts is often needed33. Because deep learning is data driven (Box 1), with enough example data it can automatically identify diseased tissues and hence avoid the need for expert-defined segmentations. Given its ability to learn complex data representations, deep learning is also often robust against undesired variation, such as inter-reader variability, and can hence be applied to a large variety of clinical conditions and parameters. In many ways, deep learning can mirror what trained radiologists do, that is, identify image parameters but also weigh up the importance of these parameters on the basis of other factors to arrive at a clinical decision. Given the growing number of applications of deep learning in medical imaging14, several efforts have compared deep learning methods with their predefined feature-based counterparts and have reported substantial performance improvements with deep learning34,35.
Studies have also shown that deep learning technologies are on par with radiologists' performance for both detection36 and segmentation37 tasks in ultrasonography and MRI, respectively. For the classification task of lymph node metastasis in PET–CT, deep learning had higher sensitivities but lower specificities than radiologists38. As these methods are iteratively refined and tailored for specific applications, a better command of the sensitivity:specificity trade-off is expected. Deep learning can also enable faster development times, as it depends solely on curated data and the corresponding metadata rather than domain expertise. On the other hand, traditional predefined feature systems have shown plateauing performance over recent years and hence do not generally meet the stringent requirements for clinical utility. As a result, only a few have been translated into the clinic39. It is expected that

Box 1 | Artificial intelligence methods in medical imaging

Machine learning algorithms based on predefined engineered features

Traditional artificial intelligence (AI) methods rely largely on predefined engineered feature algorithms (Fig. 2a) with explicit parameters based on expert knowledge. Such features are designed to quantify specific radiographic characteristics, such as the 3D shape of a tumour or the intratumoural texture and distribution of pixel intensities (histogram). A subsequent selection step ensures that only the most relevant features are used. Statistical machine learning models are then fit to these data to identify potential imaging-based biomarkers. Examples of these models include support vector machines and random forests.

Deep learning algorithms

Recent advances in AI research have given rise to new, non-deterministic, deep learning algorithms that do not require explicit feature definition, representing a fundamentally different paradigm in machine learning. The underlying methods of deep learning have existed for decades.
However, only in recent years have sufficient data and computational power become available. Without explicit feature predefinition or selection, these algorithms learn directly by navigating the data space, giving them superior problem-solving capabilities. While various deep learning architectures have been explored to address different tasks, convolutional neural networks (CNNs) are the most prevalent deep learning architecture typologies in medical imaging today. A typical CNN comprises a series of layers that successively map image inputs to desired end points while learning increasingly higher-level imaging features (Fig. 2b). Starting from an input image, 'hidden layers' within CNNs usually include a series of convolution and pooling operations extracting feature maps and performing feature aggregation, respectively. These hidden layers are then followed by fully connected layers providing high-level reasoning before an output layer produces predictions. CNNs are often trained end-to-end with labelled data for supervised learning. Other architectures, such as deep autoencoders and generative adversarial networks, are more suited for unsupervised learning tasks on unlabelled data. Transfer learning, or using pretrained networks on other data sets, is often utilized when dealing with scarce data.
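The forward pass Box 1 describes can be sketched minimally in plain NumPy: one convolution filter, a ReLU, 2×2 max-pooling, a fully connected layer and a softmax output. This is an assumption-laden toy; a real CNN stacks many such layers and learns the kernel and weights from labelled data rather than drawing them at random.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation): slide the kernel over the
    image to extract a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Max-pooling: aggregate each feature map into a coarser one."""
    h = fmap.shape[0] // size * size
    w = fmap.shape[1] // size * size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(3)
image = rng.normal(size=(8, 8))            # input 'image'
kernel = rng.normal(size=(3, 3))           # one (normally learned) filter

hidden = np.maximum(conv2d(image, kernel), 0)   # convolution + ReLU -> 6x6 map
pooled = max_pool(hidden)                       # pooling -> 3x3 map
flat = pooled.ravel()                           # flatten for dense layer
W = rng.normal(size=(2, flat.size))             # fully connected layer, 2 classes
logits = W @ flat
probs = np.exp(logits) / np.exp(logits).sum()   # output layer (softmax)
assert pooled.shape == (3, 3) and np.isclose(probs.sum(), 1.0)
```

Training would adjust `kernel` and `W` by backpropagation so that `probs` matches the labels, which is what "end-to-end with labelled data" means in practice.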