D. Rubin
Journal of the Mexican Federation of Radiology and Imaging, doi:10.24875/jmexfri.m24000073, published 2024-07-10. Citations: 0
Artificial intelligence in radiology
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced. (Perspectives, Nature Reviews Cancer. © 2018 Macmillan Publishers Limited, part of Springer Nature. All rights reserved.)

AI in medical imaging

The primary driver behind the emergence of AI in medical imaging has been the desire for greater efficacy and efficiency in clinical care. Radiological imaging data continue to grow at a disproportionate rate compared with the number of available trained readers, and the decline in imaging reimbursements has forced healthcare providers to compensate by increasing productivity24. These factors have contributed to a dramatic increase in radiologists' workloads. Studies report that, in some cases, an average radiologist must interpret one image every 3–4 seconds in an 8-hour workday to meet workload demands25.
As radiology involves visual perception as well as decision making under uncertainty26, errors are inevitable, especially under such constrained conditions. A seamlessly integrated AI component within the imaging workflow would increase efficiency, reduce errors and achieve objectives with minimal manual input by providing trained radiologists with pre-screened images and identified features. Therefore, substantial efforts and policies are being put forward to facilitate technological advances related to AI in medical imaging. Almost all image-based radiology tasks are contingent upon the quantification and assessment of radiographic characteristics from images. These characteristics can be important for the clinical task at hand, that is, for the detection, characterization or monitoring of diseases. The application of logic and statistical pattern recognition to problems in medicine has been proposed since the early 1960s27,28. As computers became more prevalent in the 1980s, the AI-powered automation of many clinical tasks shifted radiology from a perceptual, subjective craft to a quantitatively computable domain29,30. The rate at which AI is evolving radiology parallels that in other application areas and is proportional to the rapid growth of data and computational power. There are two classes of AI methods in wide use today (Box 1; Fig. 2). The first uses handcrafted, engineered features that are defined in terms of mathematical equations (such as tumour texture) and can thus be quantified using computer programs31. These features are used as inputs to state-of-the-art machine learning models that are trained to classify patients in ways that can support clinical decision making. Although such features are perceived to be discriminative, they rely on expert definition and hence do not necessarily represent the optimal feature quantification approach for the discrimination task at hand.
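The engineered-feature pipeline described above can be sketched in a few lines. The features below (mean, variance and histogram entropy of a region's pixel intensities) are generic illustrations of "predefined engineered features", not the specific features used in any cited study, and the `roi` values are invented:

```python
import math

def histogram_features(pixels, n_bins=8):
    """Quantify a region of interest with simple engineered features:
    mean intensity, variance, and the entropy of an intensity histogram."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    lo, hi = min(pixels), max(pixels)
    width = (hi - lo) / n_bins or 1.0  # guard against a flat region
    counts = [0] * n_bins
    for p in pixels:
        counts[min(int((p - lo) / width), n_bins - 1)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return [mean, var, entropy]

# A hypothetical tumour region (flattened pixel intensities, 0-255):
roi = [30, 32, 35, 200, 210, 215, 33, 205, 31, 208]
features = histogram_features(roi)
# This feature vector would then feed a classifier such as a support
# vector machine or random forest trained across many patients.
```

The point of the sketch is that every feature is an explicit formula fixed in advance by an expert, which is exactly the limitation the text goes on to discuss.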
Moreover, predefined features are often unable to adapt to variations in imaging modalities, such as computed tomography (CT), positron emission tomography (PET) and magnetic resonance imaging (MRI), and their associated signal-to-noise characteristics. The second method, deep learning, has gained considerable attention in recent years. Deep learning algorithms can automatically learn feature representations from data without the need for prior definition by human experts. This data-driven approach allows for more abstract feature definitions, making them more informative and generalizable. Deep learning can thus automatically quantify phenotypic characteristics of human tissues32, promising substantial improvements in diagnosis and clinical care. Deep learning has the added benefit of reducing the need for manual preprocessing steps. For example, to extract predefined features, accurate segmentation of diseased tissues by experts is often needed33. Because deep learning is data driven (Box 1), with enough example data it can automatically identify diseased tissues and hence avoid the need for expert-defined segmentations. Given its ability to learn complex data representations, deep learning is also often robust against undesired variation, such as inter-reader variability, and can hence be applied to a large variety of clinical conditions and parameters. In many ways, deep learning mirrors what trained radiologists do, that is, identify image parameters but also weigh the importance of these parameters on the basis of other factors to arrive at a clinical decision. Given the growing number of applications of deep learning in medical imaging14, several efforts have compared deep learning methods with their predefined feature-based counterparts and have reported substantial performance improvements with deep learning34,35.
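As a toy illustration of the feature maps that such methods compute, here is a minimal 2D convolution in plain Python. The vertical-edge filter is hand-set purely for demonstration; in a trained deep network these weights would be learned from data rather than specified by an expert:

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in most deep
    learning frameworks): slide the kernel over the image and take
    weighted sums to produce a feature map."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A tiny image with a vertical boundary (e.g. tissue vs background)
# and a vertical-edge filter; a CNN would learn such filters itself.
image = [[0, 0, 0, 9, 9],
         [0, 0, 0, 9, 9],
         [0, 0, 0, 9, 9]]
edge_filter = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]
feature_map = convolve2d(image, edge_filter)
# feature_map == [[0, 27, 27]]: strong responses where the boundary lies.
```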
Studies have also shown that deep learning technologies are on par with radiologists' performance for both detection36 and segmentation37 tasks in ultrasonography and MRI, respectively. For the classification task of lymph node metastasis in PET–CT, deep learning had higher sensitivities but lower specificities than radiologists38. As these methods are iteratively refined and tailored for specific applications, a better command of the sensitivity–specificity trade-off is expected. Deep learning can also enable faster development times, as it depends solely on curated data and the corresponding metadata rather than on domain expertise. On the other hand, traditional predefined feature systems have shown plateauing performance over recent years and hence do not generally meet the stringent requirements for clinical utility. As a result, only a few have been translated into the clinic39. It is expected that …

Box 1 | Artificial intelligence methods in medical imaging

Machine learning algorithms based on predefined engineered features. Traditional artificial intelligence (AI) methods rely largely on predefined engineered feature algorithms (Fig. 2a) with explicit parameters based on expert knowledge. Such features are designed to quantify specific radiographic characteristics, such as the 3D shape of a tumour or the intratumoural texture and distribution of pixel intensities (histogram). A subsequent selection step ensures that only the most relevant features are used. Statistical machine learning models are then fit to these data to identify potential imaging-based biomarkers. Examples of these models include support vector machines and random forests.

Deep learning algorithms. Recent advances in AI research have given rise to new, non-deterministic deep learning algorithms that do not require explicit feature definition, representing a fundamentally different paradigm in machine learning. The underlying methods of deep learning have existed for decades.
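The sensitivity–specificity trade-off discussed above reduces to simple confusion-matrix arithmetic. The counts below are invented for illustration and do not reproduce the cited study's figures:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of diseased cases caught.
    Specificity = TN / (TN + FP): fraction of healthy cases cleared."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical lymph-node-metastasis results, model vs human readers:
model_sens, model_spec = sensitivity_specificity(tp=90, fn=10, tn=70, fp=30)
reader_sens, reader_spec = sensitivity_specificity(tp=80, fn=20, tn=90, fp=10)
# The model catches more metastases (sensitivity 0.90 vs 0.80) but flags
# more healthy nodes (specificity 0.70 vs 0.90), the trade-off in the text.
```

Tuning a model's decision threshold moves it along this trade-off curve, which is what iterative refinement for a specific application amounts to.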
However, only in recent years have sufficient data and computational power become available. Without explicit feature predefinition or selection, these algorithms learn directly by navigating the data space, giving them superior problem-solving capabilities. While various deep learning architectures have been explored to address different tasks, convolutional neural networks (CNNs) are the most prevalent deep learning architecture in medical imaging today. A typical CNN comprises a series of layers that successively map image inputs to desired end points while learning increasingly higher-level imaging features (Fig. 2b). Starting from an input image, 'hidden layers' within CNNs usually include a series of convolution and pooling operations extracting feature maps and performing feature aggregation, respectively. These hidden layers are then followed by fully connected layers providing high-level reasoning before an output layer produces predictions. CNNs are often trained end-to-end with labelled data for supervised learning. Other architectures, such as deep autoencoders and generative adversarial networks, are more suited to unsupervised learning tasks on unlabelled data. Transfer learning, that is, using networks pretrained on other data sets, is often utilized when dealing with scarce data.
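The pooling step described in the box can be sketched as follows: a minimal, illustrative 2x2 max-pooling pass in plain Python, the aggregation that typically follows each convolution layer:

```python
def max_pool(feature_map, size=2):
    """2x2 max pooling: reduce each non-overlapping block of the feature
    map to its maximum, halving spatial resolution while keeping the
    strongest filter responses."""
    h, w = len(feature_map), len(feature_map[0])
    return [[max(feature_map[i + a][j + b]
                 for a in range(size) for b in range(size))
             for j in range(0, w - size + 1, size)]
            for i in range(0, h - size + 1, size)]

# A 4x4 feature map (e.g. the output of a convolution layer) pooled to 2x2:
fmap = [[1, 3, 2, 0],
        [5, 2, 0, 1],
        [0, 0, 7, 4],
        [2, 1, 3, 8]]
pooled = max_pool(fmap)
# pooled == [[5, 2], [2, 8]]
```

Stacking convolution and pooling in this way is what lets each successive layer see a larger region of the input and learn increasingly abstract features.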