Patients and generative AI: Who owns your diagnosis?

BJUI Compass · IF 1.6 · Q3 (Urology & Nephrology)
Pub Date: 2024-10-24 · DOI: 10.1002/bco2.420
Asher Mandel, Michael DeMeo, Ashutosh Maheshwari, Ash Tewari

Abstract

Generative artificial intelligence (AI) chatbots, like OpenAI's ChatGPT, have revolutionized the way that humans interact with machines. With OpenAI recently valued at roughly $80 billion, investors clearly believe that AI has a future role in many industries. Mounting excitement, however, is met by cautionary discourse about the need for ethical shepherding of AI's rollout. Several United States Congress hearings have centred on AI, and the media are abuzz with its consequences. Controversies yet to be settled include how to address the use of AI in academic publishing, education and medicine, among others.1-3 An analysis of public perspectives on comfort with AI in healthcare, drawn from social media content, found drastic heterogeneity.4 Results from a recent Pew survey suggest that higher academic attainment and experience with AI increase the likelihood of having confidence in AI's ability to enhance medical care.5 Nonetheless, natural language processing has already begun its infusion into the medical field, with use cases including electrocardiogram interpretation and white blood cell count differentials.6

Urology is no exception, embracing the benefits of AI by exploring the utility of agents (i.e., text/voice/video chatbots) and evaluating surgical skill.7, 8 Some products have already received United States Food and Drug Administration approval, such as one that assists in localizing prostate tumour volume on magnetic resonance imaging and another that diagnoses prostate cancer on histopathology.9, 10

As AI is increasingly adopted in everyday urology practice to improve efficiency and quality of care, it is imperative that we consider its looming ethical ramifications proactively. A recent review by Hung et al. has illuminated some of these challenges, stirred conversation and proposed possible policy-level solutions.11 Nevertheless, urologists have yet to address several other legal and ethical challenges looming in generative AI model development. This editorial seeks to expand the scope of the conversation to encompass considerations necessary for adopting AI in urology.

Three important issues to consider are the agency of patients over their data, ownership of the models themselves and the competition these models may introduce into the marketplace. First, healthcare institutions are charged with being ethical stewards of patient data. This paternal role may engender a sense of entitlement, and institutions may act as though they own patient data and press that argument in negotiations; however, these data cannot legally be copyrighted. Second, healthcare systems and AI companies are competing and collaborating in this emerging space. Both may be entitled to the products they develop: a hospital brings the patient data to the table, while an AI company brings the machine learning models. Should they be 50–50 partners? What is a fair split? Finally, doctors may be contributing to the development of tools that will automate components of their jobs, which may reduce the demand for their services in the marketplace. How might we weigh this factor in establishing fair partnerships?

The first issue is whether patients themselves should be represented as stakeholders in AI commercialization negotiations. Patients often consent to their data being used for research purposes, but few would willingly forgo a share of the profits when those data are used to bring an AI tool to market. While no cases have been litigated directly on patient data used for AI model training, there are landmark bioethics cases in genetics and tissue banking with potentially informative parallels. In 2023, the descendants of Henrietta Lacks settled a lawsuit with Thermo Fisher Scientific over profits from her immortalized cell line, which had been sold in a number of commercial capacities.12 In genetics, in Greenberg v. Miami Children's Hospital Research Institute, lawyers argued that tissue donated by patients led to the hospital patenting the genetic variant that causes Canavan disease and commercializing screening tests. The court upheld the claim of unjust enrichment, holding that the patients should share in the revenue from the royalties earned by the hospital.13 In these cases of tissue banking and genetic variant identification, patients have been viewed as deserving of profit sharing.

As AI models are poised to drive the next revolution in healthcare products, it is important to consider the practicality of a proactive system that addresses the implications of these precedents. Offering patients compensation for data used in AI product commercialization is a progressive idea that respects patient rights. However, most of these products are developed from retrospective research using large patient databases that the Institutional Review Board (IRB) often mandates be de-identified. Re-identifying these patients in order to compensate them could itself jeopardize their privacy. Additionally, the administrative task of negotiating contracts with individual patients is a large challenge that is not currently accounted for in research budgets.
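To make the de-identification constraint concrete, here is a minimal, hypothetical sketch of stripping direct identifiers from a record before it enters a research dataset. The field names and the identifier list are illustrative, not drawn from any specific system; HIPAA's Safe Harbor standard enumerates 18 identifier categories, of which only a few are shown.

```python
# Hypothetical sketch: remove direct identifiers from a patient record before
# it enters a de-identified research dataset. Field names are illustrative.

# A few of the identifier categories enumerated by HIPAA's Safe Harbor standard.
DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "address", "phone", "email", "date_of_birth"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct-identifier fields dropped."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {"name": "Jane Doe", "mrn": "123456", "psa_ng_ml": 4.8, "gleason_score": "3+4"}
print(deidentify(record))  # {'psa_ng_ml': 4.8, 'gleason_score': '3+4'}
```

The tension described above is visible even in this toy example: once `name` and `mrn` are gone, no key remains on which to route compensation back to the patient.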

The second issue hinges on how to organize a fair and just economic framework for the relationships between healthcare systems and the for-profit companies collaborating with them to develop AI models that may ultimately become commercialized clinical tools. The inputs of these models are patient data, which are stringently governed by the stipulations of the Health Insurance Portability and Accountability Act (HIPAA) and the IRB. Companies cannot innovate without access to these data, but if hospitals share the data they also might expect to share in future revenue. Although hospitals cannot copyright the data, they may leverage their positions as stewards of patient data to negotiate as pseudo-owners.14 Nevertheless, negotiating these contracts can be exceedingly complex, requiring significant legal and regulatory considerations that serve as stumbling blocks in the current status quo.

Let's look at a hypothetical example. A urology department wants to partner with a company to develop a tool that augments the data gleaned from a transrectal ultrasound. They want a 360° sweep of the prostate recorded as video and fed through a model that tells the urologist the length of the prostatic urethra, the volume of the prostate, and whether there are any abnormal lesions concerning for cancer. They approach the company and negotiate a fair partnership and revenue-sharing structure for the eventual commercialization of the product. The department has invested significant resources in establishing and maintaining the clinical practice that generated the patient data, and it should be compensated commensurately for any tool built on that foundation. On the other hand, the company brings deep learning expertise and an algorithm that can transform retrospective data into clinically meaningful predictions. Ultimately, the tool cannot be created without this marriage of data and algorithm.
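As a rough illustration of what such a partnership would have to specify, here is a hypothetical sketch of the tool's interface in Python. The class, function and values are invented placeholders, and a stub stands in for the trained model; the point is only that the deliverable both parties are negotiating over is, concretely, an interface like this.

```python
# Hypothetical interface for the ultrasound tool described above. The model
# call is a stub; a real system would run a trained network on the video frames.
from dataclasses import dataclass, field

@dataclass
class ProstateReport:
    urethral_length_mm: float       # length of the prostatic urethra
    prostate_volume_ml: float       # estimated gland volume
    suspicious_lesions: list = field(default_factory=list)  # lesions flagged for review

def analyze_sweep(frames: list) -> ProstateReport:
    """Placeholder inference over a 360-degree transrectal ultrasound sweep."""
    # Fixed illustrative values stand in for model predictions.
    return ProstateReport(
        urethral_length_mm=38.0,
        prostate_volume_ml=42.5,
        suspicious_lesions=[{"location": "left apex", "suspicion": 0.81}],
    )

report = analyze_sweep(frames=["frame_0", "frame_1"])
print(report.prostate_volume_ml)  # 42.5
```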

Let's consider how the courts have approached similar cases in recent history. Thus far, US courts have litigated fair use of data in training AI models only in the context of copyrighted data, which limits the applicability of those rulings in healthcare, where patient data cannot be copyrighted.14 Nonetheless, the rulings are worth exploring to shed light on how the courts are navigating these uncharted waters.

The New York Times is suing OpenAI for compensation because OpenAI trained its models on millions of the newspaper's copyrighted articles.15 The lawsuit is ongoing, and many believe the Supreme Court will ultimately hear the case, as similar filings are anticipated and a generalizable ruling will be needed to guide subsequent adjudications. The core principle is 'fair use', a legal defence that can permit the use of copyrighted material under certain circumstances.

Another intellectual property consideration is the infringing derivative work: is the output of a model so similar to its input that it adds no new value but merely copies? In analogous cases litigated in media, academia, music and other industries, the courts have issued mixed rulings. Examples include Kadrey v. Meta Platforms Inc., brought by book authors, and Thomson Reuters v. Ross Intelligence Inc., brought by a legal research provider. Essentially, the courts have ruled that the outputs of the models are sufficiently different from their inputs that they do not constitute infringing derivative works.16, 17 What the courts have clarified is that using copyrighted material without permission as model inputs is not permitted, but the outputs are probably sufficiently unique to constitute their own entities and to add new value to society.

The third issue is the potential disruption these technologies may cause in the demand for professional services. A pathologist trains for many years to make histopathological interpretations, and those interpretations are required as inputs for an AI model's training. If the model can then be used instead of the pathologist, the pathologist loses potential earnings. This prospect of economic loss breeds hesitancy about collaboration. Maintaining ownership within the medical profession and ensuring financial benefit could alleviate that hesitancy and open the floodgates of willing collaboration. Even so, the transition will be challenging, given potential resistance to change and the need for new skills training. Alternatively, some physicians' roles could shift toward supervising the AI models. Nonetheless, automation is a wider economic phenomenon that must be addressed.

A final thought—all stakeholders here deserve consideration. Successful collaborations will depend on this understanding to maintain compliance and to foster investment. There would be no Napster without musicians recording music, no YouTube without content creators. There can be no chest radiography AI assistant without patients having pneumonia and radiologists writing their impressions. It is in all our interests that these tools be trained on as much data as possible to ensure they are robust and generalizable across communities. Let's take a pause and realize that we have all the ingredients to create these tools. If we can proceed with a spirit of generosity and be mindful of everyone's contributions, there will be no stopping the great promise of clinical revolution made possible by generative AI.

Dr. Tewari discloses non-financial leadership positions in The Kalyani Prostate Cancer Institute, The Global Prostate Cancer Foundation, Roivant, PathomIQ and Intuitive Surgical. He has served as a site principal investigator on industry-sponsored clinical trials for Kite Pharma Inc., Lumicell Inc., Dendreon Pharmaceuticals LLC, Oncovir Inc., Blue Earth Diagnostics Ltd., RhoVac ApS, Bayer HealthCare Pharmaceuticals Inc. and Janssen Research and Development, LLC. Dr. Tewari has served as an unpaid consultant to Roivant Biosciences and advisor to Promaxo. He owns equity in Promaxo.

Asher Mandel has nothing to disclose. Michael DeMeo has nothing to disclose. Ashutosh Maheshwari has nothing to disclose.
