{"title":"人工智能和机器学习在临床医学中的应用:未来会怎样?","authors":"Gerard Marshall Raj, Sathian Dananjayan, Kiran Kumar Gudivada","doi":"10.1002/med4.62","DOIUrl":null,"url":null,"abstract":"<p>The co-existence of artificial intelligence (AI) and human intelligence in solving complex medical issues is inevitable in the future. However, the question remains regarding whether this relationship would continue to be symbiotic and make room for better human-human interactions (the much-yearned patient-physician relationship) in clinical medicine [<span>1</span>].</p><p>The evolution of computational power, data science, and machine learning (ML) models is highly perceptible and sometimes unpredictable. Even Gordon Moore (cofounder of Intel®) had to revise his “Moore's law” from <i>‘the number of transistors on an integrated circuit would double every year’ (1960)</i> to <i>‘… every 2 years’</i> (1975) [<span>2</span>]. The same standard holds true for the application of AI in medicine, which ranges from diagnostics, therapeutics (personalized), prognostics, biomedical research (including clinical trials), public health (including pandemic preparedness), and administrative purposes [<span>3</span>]. Some of these possible current applications were barely foreseen and are largely unprecedented (Figure 1).</p><p>Through the aforementioned transitions, in addition to the scientific rigor and robustness of AI and ML concepts in medicine, the other considerations are ethical implications, regulatory standards, and legal challenges. Among these, the ethical intricacies surrounding AI in clinical applications are currently being considered upon more keenly and ethical guidelines are being fine-tuned across the world [<span>4-6</span>]. Instances of ethical issues include unfairly incentivizing people of the lower socio-economic strata to contribute personal data to AI development; the chances of cyberattacks on AI technologies, and the ensuing breach in data security and access to sensitive and private information; lack of transparency and explainability regarding how AI-based decisions and recommendations are derived (i.e., how the output is being derived from the input?—“black-box issue”); and overreliance on output from AI-driven technologies (“automation bias”) [<span>5, 7, 8</span>].</p><p>Overall, the principles of “transparency”, “justice, fairness, and equity”, “non-maleficence”, “responsibility and accountability”, and “privacy” were found to be common in global guidelines on ethical AI [<span>6, 9</span>].</p><p>Additionally, recently, the chatbots (like, ChatGPT and its successor GPT-4) have been the buzzwords in various health-related applications, from academic writing to clearing medical licensing exams, despite their inherent limitations and controversies [<span>10, 11</span>], including language bias [<span>12</span>], regional divide [<span>13</span>], environmental impact [<span>14</span>], and more importantly, compromise on publication ethics [<span>15</span>].</p><p>The medical profession is still based on the core principles of love, empathy, and compassion, but this may not always be replicated by ML-based healthcare tools and may sometimes be impossible [<span>16</span>]. Furthermore, the unwarranted forecasting of future health conditions may predispose the individual to heightened apprehension, psychological stress, and emotional distress, and consequent stigmatization [<span>5, 7</span>]. 
Hence, another dimension that is being explored is the addition of an emotional quotient to all AI applications, including chatbots [<span>17</span>].</p><p>Nevertheless, the science of AI shall continuously be honed for the betterment of human life—towards making them more humanizing and less perilous [<span>3</span>].</p><p><b>Gerard Marshall Raj</b>: Conceptualization (lead); investigation (lead); methodology (lead); project administration (lead); visualization (supporting); writing—original draft preparation (lead); writing—review and editing (lead). <b>Sathian Dananjayan:</b> Investigation (supporting); methodology (supporting); project administration (supporting); visualization (lead); writing—review and editing (supporting). <b>Kiran Kumar Gudivada</b>: Investigation (supporting); methodology (supporting); project administration (supporting); writing—review and editing (supporting).</p><p>The authors declare no conflicts of interest.</p><p>Not applicable.</p><p>Not applicable.</p>","PeriodicalId":100913,"journal":{"name":"Medicine Advances","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/med4.62","citationCount":"0","resultStr":"{\"title\":\"Applications of artificial intelligence and machine learning in clinical medicine: What lies ahead?\",\"authors\":\"Gerard Marshall Raj, Sathian Dananjayan, Kiran Kumar Gudivada\",\"doi\":\"10.1002/med4.62\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The co-existence of artificial intelligence (AI) and human intelligence in solving complex medical issues is inevitable in the future. However, the question remains regarding whether this relationship would continue to be symbiotic and make room for better human-human interactions (the much-yearned patient-physician relationship) in clinical medicine [<span>1</span>].</p><p>The evolution of computational power, data science, and machine learning (ML) models is highly perceptible and sometimes unpredictable. Even Gordon Moore (cofounder of Intel®) had to revise his “Moore's law” from <i>‘the number of transistors on an integrated circuit would double every year’ (1960)</i> to <i>‘… every 2 years’</i> (1975) [<span>2</span>]. The same standard holds true for the application of AI in medicine, which ranges from diagnostics, therapeutics (personalized), prognostics, biomedical research (including clinical trials), public health (including pandemic preparedness), and administrative purposes [<span>3</span>]. Some of these possible current applications were barely foreseen and are largely unprecedented (Figure 1).</p><p>Through the aforementioned transitions, in addition to the scientific rigor and robustness of AI and ML concepts in medicine, the other considerations are ethical implications, regulatory standards, and legal challenges. Among these, the ethical intricacies surrounding AI in clinical applications are currently being considered upon more keenly and ethical guidelines are being fine-tuned across the world [<span>4-6</span>]. 
Instances of ethical issues include unfairly incentivizing people of the lower socio-economic strata to contribute personal data to AI development; the chances of cyberattacks on AI technologies, and the ensuing breach in data security and access to sensitive and private information; lack of transparency and explainability regarding how AI-based decisions and recommendations are derived (i.e., how the output is being derived from the input?—“black-box issue”); and overreliance on output from AI-driven technologies (“automation bias”) [<span>5, 7, 8</span>].</p><p>Overall, the principles of “transparency”, “justice, fairness, and equity”, “non-maleficence”, “responsibility and accountability”, and “privacy” were found to be common in global guidelines on ethical AI [<span>6, 9</span>].</p><p>Additionally, recently, the chatbots (like, ChatGPT and its successor GPT-4) have been the buzzwords in various health-related applications, from academic writing to clearing medical licensing exams, despite their inherent limitations and controversies [<span>10, 11</span>], including language bias [<span>12</span>], regional divide [<span>13</span>], environmental impact [<span>14</span>], and more importantly, compromise on publication ethics [<span>15</span>].</p><p>The medical profession is still based on the core principles of love, empathy, and compassion, but this may not always be replicated by ML-based healthcare tools and may sometimes be impossible [<span>16</span>]. Furthermore, the unwarranted forecasting of future health conditions may predispose the individual to heightened apprehension, psychological stress, and emotional distress, and consequent stigmatization [<span>5, 7</span>]. Hence, another dimension that is being explored is the addition of an emotional quotient to all AI applications, including chatbots [<span>17</span>].</p><p>Nevertheless, the science of AI shall continuously be honed for the betterment of human life—towards making them more humanizing and less perilous [<span>3</span>].</p><p><b>Gerard Marshall Raj</b>: Conceptualization (lead); investigation (lead); methodology (lead); project administration (lead); visualization (supporting); writing—original draft preparation (lead); writing—review and editing (lead). <b>Sathian Dananjayan:</b> Investigation (supporting); methodology (supporting); project administration (supporting); visualization (lead); writing—review and editing (supporting). 
<b>Kiran Kumar Gudivada</b>: Investigation (supporting); methodology (supporting); project administration (supporting); writing—review and editing (supporting).</p><p>The authors declare no conflicts of interest.</p><p>Not applicable.</p><p>Not applicable.</p>\",\"PeriodicalId\":100913,\"journal\":{\"name\":\"Medicine Advances\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/med4.62\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medicine Advances\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/med4.62\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medicine Advances","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/med4.62","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Applications of artificial intelligence and machine learning in clinical medicine: What lies ahead?
The co-existence of artificial intelligence (AI) and human intelligence in solving complex medical problems is inevitable. The open question is whether this relationship will remain symbiotic and leave room for better human-to-human interaction, the much-sought patient-physician relationship, in clinical medicine [1].
The evolution of computational power, data science, and machine learning (ML) models is strikingly rapid and sometimes unpredictable. Even Gordon Moore (cofounder of Intel®) had to revise his “Moore's law” from ‘the number of transistors on an integrated circuit would double every year’ (1965) to ‘… every 2 years’ (1975) [2]. A similar trajectory holds for the applications of AI in medicine, which span diagnostics, personalized therapeutics, prognostics, biomedical research (including clinical trials), public health (including pandemic preparedness), and administrative purposes [3]. Some of the current applications were scarcely foreseen and are largely unprecedented (Figure 1).
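To make the compounding difference between the two formulations concrete, consider the following back-of-the-envelope sketch (our illustration, not from the article; the baseline of roughly 2,300 transistors, approximately the Intel 4004 of 1971, is an assumption chosen purely for convenience):

    # Back-of-the-envelope comparison of the 1965 ("double every year") and
    # 1975 ("double every 2 years") formulations of Moore's law.
    def transistors(start: int, years: float, doubling_period: float) -> float:
        """Projected transistor count after `years`, doubling every `doubling_period` years."""
        return start * 2 ** (years / doubling_period)

    START = 2_300  # ~Intel 4004 (1971); an illustrative baseline, not a figure from the article
    for years in (10, 20):
        yearly = transistors(START, years, 1)
        biennial = transistors(START, years, 2)
        print(f"After {years} years: x{yearly / START:,.0f} (yearly) vs x{biennial / START:,.0f} (every 2 years)")

After 20 years the two projections differ by a factor of about a thousand (x1,048,576 versus x1,024), which is why even the law's own author had to revise it.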
Throughout these transitions, considerations beyond the scientific rigor and robustness of AI and ML concepts in medicine include ethical implications, regulatory standards, and legal challenges. Among these, the ethical intricacies of AI in clinical applications are currently receiving the keenest attention, and ethical guidelines are being fine-tuned across the world [4-6]. Examples of ethical issues include unfairly incentivizing people of lower socio-economic strata to contribute personal data to AI development; the risk of cyberattacks on AI technologies, with the ensuing breach of data security and exposure of sensitive private information; the lack of transparency and explainability in how AI-based decisions and recommendations are derived (i.e., how is the output obtained from the input? This is the “black-box issue”); and overreliance on the output of AI-driven technologies (“automation bias”) [5, 7, 8].
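To ground the “black-box issue” in something runnable, the following minimal sketch (our illustration, not taken from the cited guidelines; it assumes only NumPy) probes an opaque prediction function post hoc with permutation importance, one common model-agnostic explainability technique: each input feature is shuffled in turn, and the resulting drop in accuracy indicates how heavily the hidden model relies on it.

    import numpy as np

    rng = np.random.default_rng(0)

    # A toy "black-box" risk model: callers see only predict(), never the internal rule.
    def predict(X: np.ndarray) -> np.ndarray:
        # Hidden logic: risk driven strongly by feature 0, weakly by feature 1.
        return (2.0 * X[:, 0] + 0.5 * X[:, 1] > 0.0).astype(int)

    # Synthetic "patient" data: three features, the third is pure noise.
    X = rng.normal(size=(1_000, 3))
    y = predict(X)  # ground truth taken from the black box itself, for illustration

    def permutation_importance(model, X, y, n_repeats=20):
        """Mean drop in accuracy when each feature column is shuffled independently."""
        baseline = (model(X) == y).mean()
        importances = []
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                Xp = X.copy()
                Xp[:, j] = rng.permutation(Xp[:, j])
                drops.append(baseline - (model(Xp) == y).mean())
            importances.append(float(np.mean(drops)))
        return importances

    print(permutation_importance(predict, X, y))
    # Expected: a large drop for feature 0, a small drop for feature 1, ~0 for feature 2.

Techniques like this only approximate an explanation from outside the box; they do not make the underlying decision process itself transparent, which is precisely the regulatory concern.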
Overall, the principles of “transparency”, “justice, fairness, and equity”, “non-maleficence”, “responsibility and accountability”, and “privacy” were found to be common in global guidelines on ethical AI [6, 9].
Additionally, chatbots (such as ChatGPT and its successor, GPT-4) have recently become buzzwords in various health-related applications, from academic writing to passing medical licensing exams, despite their inherent limitations and controversies [10, 11], including language bias [12], the regional divide [13], environmental impact [14] and, more importantly, compromised publication ethics [15].
The medical profession is still grounded in the core principles of love, empathy, and compassion, qualities that ML-based healthcare tools may not always replicate and sometimes cannot [16]. Furthermore, unwarranted forecasting of future health conditions may predispose individuals to heightened apprehension, psychological stress, and emotional distress, with consequent stigmatization [5, 7]. Hence, another dimension being explored is the addition of an emotional quotient to AI applications, including chatbots [17].
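At its simplest, adding an emotional quotient means conditioning a chatbot's reply on the user's detected emotional state. The following deliberately naive sketch is entirely our illustration (real systems use trained affect-recognition models, not keyword lists, and every name below is hypothetical):

    # A deliberately naive "emotional quotient" layer for a chatbot reply.
    # Real systems would use a trained affect-recognition model, not keywords.
    DISTRESS_CUES = {"scared", "worried", "anxious", "afraid", "hopeless"}

    def detect_distress(message: str) -> bool:
        """Crude affect detection: flag messages containing distress keywords."""
        words = {w.strip(".,!?").lower() for w in message.split()}
        return bool(words & DISTRESS_CUES)

    def reply(message: str, clinical_answer: str) -> str:
        """Prepend an empathetic acknowledgement when distress is detected."""
        if detect_distress(message):
            return ("I hear that this is frightening, and that is understandable. "
                    + clinical_answer
                    + " Please discuss this with your clinician, who knows your situation.")
        return clinical_answer

    print(reply("I'm scared my scan results mean cancer.",
                "Many incidental findings on scans turn out to be benign."))

Even this toy version shows the design question involved: the empathetic framing is bolted onto, rather than derived from, the clinical content, which is why genuine compassion remains hard to automate.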
Nevertheless, the science of AI should be continuously honed for the betterment of human life, toward making AI applications more humane and less perilous [3].

Author contributions: Gerard Marshall Raj: conceptualization (lead); investigation (lead); methodology (lead); project administration (lead); visualization (supporting); writing – original draft preparation (lead); writing – review and editing (lead). Sathian Dananjayan: investigation (supporting); methodology (supporting); project administration (supporting); visualization (lead); writing – review and editing (supporting). Kiran Kumar Gudivada: investigation (supporting); methodology (supporting); project administration (supporting); writing – review and editing (supporting).

Conflict of interest: The authors declare no conflicts of interest.