Applications of artificial intelligence and machine learning in clinical medicine: What lies ahead?

Medicine Advances | Pub Date: 2024-06-05 | DOI: 10.1002/med4.62
Gerard Marshall Raj, Sathian Dananjayan, Kiran Kumar Gudivada
{"title":"人工智能和机器学习在临床医学中的应用:未来会怎样?","authors":"Gerard Marshall Raj,&nbsp;Sathian Dananjayan,&nbsp;Kiran Kumar Gudivada","doi":"10.1002/med4.62","DOIUrl":null,"url":null,"abstract":"<p>The co-existence of artificial intelligence (AI) and human intelligence in solving complex medical issues is inevitable in the future. However, the question remains regarding whether this relationship would continue to be symbiotic and make room for better human-human interactions (the much-yearned patient-physician relationship) in clinical medicine [<span>1</span>].</p><p>The evolution of computational power, data science, and machine learning (ML) models is highly perceptible and sometimes unpredictable. Even Gordon Moore (cofounder of Intel®) had to revise his “Moore's law” from <i>‘the number of transistors on an integrated circuit would double every year’ (1960)</i> to <i>‘… every 2 years’</i> (1975) [<span>2</span>]. The same standard holds true for the application of AI in medicine, which ranges from diagnostics, therapeutics (personalized), prognostics, biomedical research (including clinical trials), public health (including pandemic preparedness), and administrative purposes [<span>3</span>]. Some of these possible current applications were barely foreseen and are largely unprecedented (Figure 1).</p><p>Through the aforementioned transitions, in addition to the scientific rigor and robustness of AI and ML concepts in medicine, the other considerations are ethical implications, regulatory standards, and legal challenges. Among these, the ethical intricacies surrounding AI in clinical applications are currently being considered upon more keenly and ethical guidelines are being fine-tuned across the world [<span>4-6</span>]. Instances of ethical issues include unfairly incentivizing people of the lower socio-economic strata to contribute personal data to AI development; the chances of cyberattacks on AI technologies, and the ensuing breach in data security and access to sensitive and private information; lack of transparency and explainability regarding how AI-based decisions and recommendations are derived (i.e., how the output is being derived from the input?—“black-box issue”); and overreliance on output from AI-driven technologies (“automation bias”) [<span>5, 7, 8</span>].</p><p>Overall, the principles of “transparency”, “justice, fairness, and equity”, “non-maleficence”, “responsibility and accountability”, and “privacy” were found to be common in global guidelines on ethical AI [<span>6, 9</span>].</p><p>Additionally, recently, the chatbots (like, ChatGPT and its successor GPT-4) have been the buzzwords in various health-related applications, from academic writing to clearing medical licensing exams, despite their inherent limitations and controversies [<span>10, 11</span>], including language bias [<span>12</span>], regional divide [<span>13</span>], environmental impact [<span>14</span>], and more importantly, compromise on publication ethics [<span>15</span>].</p><p>The medical profession is still based on the core principles of love, empathy, and compassion, but this may not always be replicated by ML-based healthcare tools and may sometimes be impossible [<span>16</span>]. Furthermore, the unwarranted forecasting of future health conditions may predispose the individual to heightened apprehension, psychological stress, and emotional distress, and consequent stigmatization [<span>5, 7</span>]. 
Hence, another dimension that is being explored is the addition of an emotional quotient to all AI applications, including chatbots [<span>17</span>].</p><p>Nevertheless, the science of AI shall continuously be honed for the betterment of human life—towards making them more humanizing and less perilous [<span>3</span>].</p><p><b>Gerard Marshall Raj</b>: Conceptualization (lead); investigation (lead); methodology (lead); project administration (lead); visualization (supporting); writing—original draft preparation (lead); writing—review and editing (lead). <b>Sathian Dananjayan:</b> Investigation (supporting); methodology (supporting); project administration (supporting); visualization (lead); writing—review and editing (supporting). <b>Kiran Kumar Gudivada</b>: Investigation (supporting); methodology (supporting); project administration (supporting); writing—review and editing (supporting).</p><p>The authors declare no conflicts of interest.</p><p>Not applicable.</p><p>Not applicable.</p>","PeriodicalId":100913,"journal":{"name":"Medicine Advances","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/med4.62","citationCount":"0","resultStr":"{\"title\":\"Applications of artificial intelligence and machine learning in clinical medicine: What lies ahead?\",\"authors\":\"Gerard Marshall Raj,&nbsp;Sathian Dananjayan,&nbsp;Kiran Kumar Gudivada\",\"doi\":\"10.1002/med4.62\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>The co-existence of artificial intelligence (AI) and human intelligence in solving complex medical issues is inevitable in the future. However, the question remains regarding whether this relationship would continue to be symbiotic and make room for better human-human interactions (the much-yearned patient-physician relationship) in clinical medicine [<span>1</span>].</p><p>The evolution of computational power, data science, and machine learning (ML) models is highly perceptible and sometimes unpredictable. Even Gordon Moore (cofounder of Intel®) had to revise his “Moore's law” from <i>‘the number of transistors on an integrated circuit would double every year’ (1960)</i> to <i>‘… every 2 years’</i> (1975) [<span>2</span>]. The same standard holds true for the application of AI in medicine, which ranges from diagnostics, therapeutics (personalized), prognostics, biomedical research (including clinical trials), public health (including pandemic preparedness), and administrative purposes [<span>3</span>]. Some of these possible current applications were barely foreseen and are largely unprecedented (Figure 1).</p><p>Through the aforementioned transitions, in addition to the scientific rigor and robustness of AI and ML concepts in medicine, the other considerations are ethical implications, regulatory standards, and legal challenges. Among these, the ethical intricacies surrounding AI in clinical applications are currently being considered upon more keenly and ethical guidelines are being fine-tuned across the world [<span>4-6</span>]. 
Instances of ethical issues include unfairly incentivizing people of the lower socio-economic strata to contribute personal data to AI development; the chances of cyberattacks on AI technologies, and the ensuing breach in data security and access to sensitive and private information; lack of transparency and explainability regarding how AI-based decisions and recommendations are derived (i.e., how the output is being derived from the input?—“black-box issue”); and overreliance on output from AI-driven technologies (“automation bias”) [<span>5, 7, 8</span>].</p><p>Overall, the principles of “transparency”, “justice, fairness, and equity”, “non-maleficence”, “responsibility and accountability”, and “privacy” were found to be common in global guidelines on ethical AI [<span>6, 9</span>].</p><p>Additionally, recently, the chatbots (like, ChatGPT and its successor GPT-4) have been the buzzwords in various health-related applications, from academic writing to clearing medical licensing exams, despite their inherent limitations and controversies [<span>10, 11</span>], including language bias [<span>12</span>], regional divide [<span>13</span>], environmental impact [<span>14</span>], and more importantly, compromise on publication ethics [<span>15</span>].</p><p>The medical profession is still based on the core principles of love, empathy, and compassion, but this may not always be replicated by ML-based healthcare tools and may sometimes be impossible [<span>16</span>]. Furthermore, the unwarranted forecasting of future health conditions may predispose the individual to heightened apprehension, psychological stress, and emotional distress, and consequent stigmatization [<span>5, 7</span>]. Hence, another dimension that is being explored is the addition of an emotional quotient to all AI applications, including chatbots [<span>17</span>].</p><p>Nevertheless, the science of AI shall continuously be honed for the betterment of human life—towards making them more humanizing and less perilous [<span>3</span>].</p><p><b>Gerard Marshall Raj</b>: Conceptualization (lead); investigation (lead); methodology (lead); project administration (lead); visualization (supporting); writing—original draft preparation (lead); writing—review and editing (lead). <b>Sathian Dananjayan:</b> Investigation (supporting); methodology (supporting); project administration (supporting); visualization (lead); writing—review and editing (supporting). 
<b>Kiran Kumar Gudivada</b>: Investigation (supporting); methodology (supporting); project administration (supporting); writing—review and editing (supporting).</p><p>The authors declare no conflicts of interest.</p><p>Not applicable.</p><p>Not applicable.</p>\",\"PeriodicalId\":100913,\"journal\":{\"name\":\"Medicine Advances\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/med4.62\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medicine Advances\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/med4.62\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medicine Advances","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/med4.62","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract



The co-existence of artificial intelligence (AI) and human intelligence in solving complex medical problems is inevitable. The open question is whether this relationship will remain symbiotic and make room for better human-human interactions (the much-yearned-for patient-physician relationship) in clinical medicine [1].

The evolution of computational power, data science, and machine learning (ML) models is highly perceptible and sometimes unpredictable. Even Gordon Moore (cofounder of Intel®) had to revise his "Moore's law" from 'the number of transistors on an integrated circuit would double every year' (1965) to '… every 2 years' (1975) [2]. The same holds true for the application of AI in medicine, which spans diagnostics, personalized therapeutics, prognostics, biomedical research (including clinical trials), public health (including pandemic preparedness), and administrative purposes [3]. Some of these current applications were barely foreseen and are largely unprecedented (Figure 1).
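As a back-of-the-envelope illustration (ours, not part of the cited history), both versions of the law describe exponential growth in transistor count $N(t)$ with doubling period $T$:

$$N(t) = N_0 \cdot 2^{t/T}, \qquad T = 1\ \text{year (1965)} \ \text{or}\ T = 2\ \text{years (1975)}.$$

Over a single decade the two formulations diverge sharply: the original law implies $2^{10} = 1024$-fold growth, whereas the revision implies only $2^{5} = 32$-fold.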

Beyond the aforementioned transitions, and in addition to the scientific rigor and robustness of AI and ML concepts in medicine, the other considerations are ethical implications, regulatory standards, and legal challenges. Among these, the ethical intricacies surrounding AI in clinical applications are currently receiving the keenest attention, and ethical guidelines are being fine-tuned across the world [4-6]. Instances of ethical issues include unfairly incentivizing people from lower socio-economic strata to contribute personal data to AI development; the risk of cyberattacks on AI technologies, with the ensuing breaches of data security and exposure of sensitive, private information; the lack of transparency and explainability in how AI-based decisions and recommendations are derived (the "black-box" issue: how is the output derived from the input?); and overreliance on the output of AI-driven technologies ("automation bias") [5, 7, 8].
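To make the "black-box" issue concrete, the minimal sketch below (our illustration; the editorial prescribes no particular tooling) trains an opaque classifier on synthetic stand-in data and then probes it with permutation importance, one common post-hoc explainability technique. The dataset, feature count, and model choice are all assumptions made for the example.

```python
# Minimal sketch of the "black-box" issue and one post-hoc remedy.
# All data here is synthetic; no real clinical features are implied.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular clinical data (5 anonymous features).
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# A random forest is accurate but not directly human-readable: the
# "black box" whose recommendations a clinician is asked to trust.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance shuffles one feature at a time and measures
# the drop in test score, approximating how much the output depends
# on each input: a partial answer to "how was this derived?".
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```

Techniques like this mitigate, but do not eliminate, the opacity concern: they indicate which inputs matter, not why a particular recommendation was made.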

Overall, the principles of “transparency”, “justice, fairness, and equity”, “non-maleficence”, “responsibility and accountability”, and “privacy” were found to be common in global guidelines on ethical AI [6, 9].

Additionally, chatbots (such as ChatGPT and its successor, GPT-4) have recently become buzzwords in various health-related applications, from academic writing to passing medical licensing examinations, despite their inherent limitations and controversies [10, 11], including language bias [12], the regional divide [13], environmental impact [14] and, more importantly, compromised publication ethics [15].

The medical profession is still based on the core principles of love, empathy, and compassion, qualities that ML-based healthcare tools may not always replicate and that may sometimes be impossible for them to replicate [16]. Furthermore, unwarranted forecasting of future health conditions may predispose individuals to heightened apprehension, psychological stress, and emotional distress, with consequent stigmatization [5, 7]. Hence, another dimension being explored is the addition of an emotional quotient to AI applications, including chatbots [17].

Nevertheless, the science of AI should be continuously honed for the betterment of human life, making AI applications more humane and less perilous [3].

Gerard Marshall Raj: Conceptualization (lead); investigation (lead); methodology (lead); project administration (lead); visualization (supporting); writing—original draft preparation (lead); writing—review and editing (lead). Sathian Dananjayan: Investigation (supporting); methodology (supporting); project administration (supporting); visualization (lead); writing—review and editing (supporting). Kiran Kumar Gudivada: Investigation (supporting); methodology (supporting); project administration (supporting); writing—review and editing (supporting).

The authors declare no conflicts of interest.

Not applicable.

Not applicable.
