Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research

IF 18.8 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Artem A. Trotsyuk, Quinn Waeiss, Raina Talwar Bhatia, Brandon J. Aponte, Isabella M. L. Heffernan, Devika Madgavkar, Ryan Marshall Felder, Lisa Soleymani Lehmann, Megan J. Palmer, Hank Greely, Russell Wald, Lea Goetz, Markus Trengove, Robert Vandersluis, Herbert Lin, Mildred K. Cho, Russ B. Altman, Drew Endy, David A. Relman, Margaret Levi, Debra Satz, David Magnus
{"title":"Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research","authors":"Artem A. Trotsyuk, Quinn Waeiss, Raina Talwar Bhatia, Brandon J. Aponte, Isabella M. L. Heffernan, Devika Madgavkar, Ryan Marshall Felder, Lisa Soleymani Lehmann, Megan J. Palmer, Hank Greely, Russell Wald, Lea Goetz, Markus Trengove, Robert Vandersluis, Herbert Lin, Mildred K. Cho, Russ B. Altman, Drew Endy, David A. Relman, Margaret Levi, Debra Satz, David Magnus","doi":"10.1038/s42256-024-00926-3","DOIUrl":null,"url":null,"abstract":"<p>The rapid advancement of artificial intelligence (AI) in biomedical research presents considerable potential for misuse, including authoritarian surveillance, data misuse, bioweapon development, increase in inequity and abuse of privacy. We propose a multi-pronged framework for researchers to mitigate these risks, looking first to existing ethical frameworks and regulatory measures researchers can adapt to their own work, next to off-the-shelf AI solutions, then to design-specific solutions researchers can build into their AI to mitigate misuse. When researchers remain unable to address the potential for harmful misuse, and the risks outweigh potential benefits, we recommend researchers consider a different approach to answering their research question, or a new research question if the risks remain too great. We apply this framework to three different domains of AI research where misuse is likely to be problematic: (1) AI for drug and chemical discovery; (2) generative models for synthetic data; (3) ambient intelligence.</p>","PeriodicalId":48533,"journal":{"name":"Nature Machine Intelligence","volume":"19 1","pages":""},"PeriodicalIF":18.8000,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nature Machine Intelligence","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1038/s42256-024-00926-3","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The rapid advancement of artificial intelligence (AI) in biomedical research presents considerable potential for misuse, including authoritarian surveillance, data misuse, bioweapon development, increase in inequity and abuse of privacy. We propose a multi-pronged framework for researchers to mitigate these risks, looking first to existing ethical frameworks and regulatory measures researchers can adapt to their own work, next to off-the-shelf AI solutions, then to design-specific solutions researchers can build into their AI to mitigate misuse. When researchers remain unable to address the potential for harmful misuse, and the risks outweigh potential benefits, we recommend researchers consider a different approach to answering their research question, or a new research question if the risks remain too great. We apply this framework to three different domains of AI research where misuse is likely to be problematic: (1) AI for drug and chemical discovery; (2) generative models for synthetic data; (3) ambient intelligence.
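The framework described in the abstract is, at its core, a tiered decision procedure: try existing ethical and regulatory measures first, then off-the-shelf solutions, then design-specific safeguards, and change course if none suffices. As a minimal illustrative sketch only, and not a procedure or API defined by the paper, that flow could be written in Python as follows; the function name, tier labels, and the `risk_acceptable_after` judgment are hypothetical assumptions.

```python
# Minimal sketch (hypothetical, not from the paper): the tiered mitigation flow
# from the abstract, tried in order until residual risk no longer outweighs benefit.

from typing import Callable


def apply_framework(risk_acceptable_after: Callable[[str], bool]) -> str:
    """risk_acceptable_after(tier) stands in for the researcher's own judgment:
    after applying this tier of mitigation, is the residual risk acceptable?"""
    tiers = [
        "adapt existing ethical frameworks and regulatory measures",
        "apply off-the-shelf AI solutions",
        "build design-specific safeguards into the AI system",
    ]
    for tier in tiers:
        if risk_acceptable_after(tier):
            return f"proceed, documenting the mitigation used: {tier}"
    # No tier reduced risk below expected benefit: change course.
    return "reconsider the research approach, or pose a new research question"


# Hypothetical usage: only design-specific safeguards suffice for this project.
print(apply_framework(lambda tier: "design-specific" in tier))
```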


Source journal
CiteScore: 36.90
Self-citation rate: 2.10%
Articles published: 127
Journal description: Nature Machine Intelligence is a distinguished publication that presents original research and reviews on various topics in machine learning, robotics, and AI. Our focus extends beyond these fields, exploring their profound impact on other scientific disciplines, as well as societal and industrial aspects. We recognize limitless possibilities wherein machine intelligence can augment human capabilities and knowledge in domains like scientific exploration, healthcare, medical diagnostics, and the creation of safe and sustainable cities, transportation, and agriculture. Simultaneously, we acknowledge the emergence of ethical, social, and legal concerns due to the rapid pace of advancements. To foster interdisciplinary discussions on these far-reaching implications, Nature Machine Intelligence serves as a platform for dialogue facilitated through Comments, News Features, News & Views articles, and Correspondence. Our goal is to encourage a comprehensive examination of these subjects. Similar to all Nature-branded journals, Nature Machine Intelligence operates under the guidance of a team of skilled editors. We adhere to a fair and rigorous peer-review process, ensuring high standards of copy-editing and production, swift publication, and editorial independence.