Informing the ethical review of human subjects research utilizing artificial intelligence

IF 2.4 · Q3 · Computer Science, Interdisciplinary Applications
Christos Andreas Makridis, Anthony Boese, Rafael Fricks, Don Workman, Molly Klote, Joshua Mueller, Isabel J. Hildebrandt, Michael Kim, Gil Alterovitz
DOI: 10.3389/fcomp.2023.1235226
Journal: Frontiers in Computer Science
Published: 2023-09-14 (Journal Article)
Citations: 0

Abstract

Introduction: The rapid expansion of artificial intelligence (AI) has produced many opportunities, but also new risks that must be actively managed, particularly in health care and clinical practice, to avoid unintended health, economic, and social consequences.

Methods: Given that much of the research and development (R&D) involving human subjects is reviewed and rigorously monitored by institutional review boards (IRBs), we argue that supplemental questions added to the IRB process are an efficient risk-mitigation technique available for immediate use. To facilitate this, we introduce AI supplemental questions that provide a feasible, low-disruption mechanism for IRBs to elicit the information necessary to inform the review of AI proposals. These questions will also be relevant to the review of research using AI that is exempt from the requirement of IRB review. We pilot the questions within the Department of Veterans Affairs, the nation's largest integrated healthcare system, and demonstrate their efficacy in risk mitigation by providing vital information in a way that is accessible to the non-AI subject matter experts responsible for reviewing IRB proposals. We provide these questions for other organizations to adapt to their needs, and are further developing them into an AI IRB module with an extended application, a review checklist, informed consent, and other informational materials.

Results: We find that the supplemental AI IRB module streamlines and expedites the review of IRB projects. We also find that the module has a positive effect on reviewers' attitudes and makes it easier to assess the potential alignment and risks associated with proposed projects.

Discussion: As projects increasingly contain an AI component, streamlining their review and assessment is important to avoid placing too large a burden on IRBs. In addition, establishing a minimum standard that submissions must meet will help ensure that all projects are at least aware of the potential risks unique to AI and engage in dialogue with their local IRBs about them. Further work is needed to apply these concepts to other non-IRB pathways, such as quality improvement projects.
Source journal: Frontiers in Computer Science (Computer Science, Interdisciplinary Applications)
CiteScore: 4.30
Self-citation rate: 0.00%
Articles published: 152
Review time: 13 weeks