Leveraging artificial intelligence to detect ethical concerns in medical research: a case study.

Impact Factor: 3.3 · JCR Q1 (Ethics) · CAS Region 2 (Philosophy)
Kannan Sridharan, Gowri Sivaramakrishnan
Journal of Medical Ethics, pp. 126-134. Published 2025-01-23.
DOI: 10.1136/jme-2023-109767 (https://doi.org/10.1136/jme-2023-109767)
Citations: 0

Abstract

Background: Institutional review boards (IRBs) have been criticised for delays in approvals for research proposals due to inadequate or inexperienced IRB staff. Artificial intelligence (AI), particularly large language models (LLMs), has significant potential to assist IRB members in a prompt and efficient reviewing process.

Methods: Four LLMs were evaluated on whether they could identify potential ethical issues in seven validated case studies. The LLMs were prompted with queries related to the proposed eligibility criteria of the study participants, vulnerability issues, information to be disclosed in the informed consent document (ICD), risk-benefit assessment and justification of the use of a placebo. Another query was issued to the LLMs to generate ICDs for these case scenarios.
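The querying protocol described above can be sketched as follows. This is a minimal illustration only: the exact prompt wording and the query-construction helpers are assumptions for demonstration, not taken from the paper.

```python
# Hypothetical sketch of the study's prompting protocol: one query per
# ethical-review domain for each case study, plus a separate query asking
# the model to draft an informed consent document (ICD).

REVIEW_DOMAINS = [
    "proposed eligibility criteria of the study participants",
    "vulnerability issues",
    "information to be disclosed in the informed consent document (ICD)",
    "risk-benefit assessment",
    "justification of the use of a placebo",
]


def build_review_queries(case_study: str) -> list[str]:
    """Build one review query per ethical domain for a given case study."""
    return [
        f"For the following research protocol, assess the {domain}:\n{case_study}"
        for domain in REVIEW_DOMAINS
    ]


def build_icd_query(case_study: str) -> str:
    """Build the separate query asking the model to generate an ICD."""
    return (
        "Draft an informed consent document for the following research "
        f"protocol:\n{case_study}"
    )
```

Each query list would then be sent to each of the four LLMs in turn; the study found that follow-up prompts on the weaker domains (placebo suitability, risk mitigation, participant risks) improved the outputs over a single prompt.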

Results: All four LLMs were able to provide answers to the queries related to all seven cases. In general, the responses were homogeneous with respect to most elements. LLMs performed suboptimally in identifying the suitability of the placebo arm, risk mitigation strategies and potential risks to study participants in certain case studies with a single prompt. However, multiple prompts led to better outputs in all of these domains. Each of the LLMs included all of the fundamental elements of the ICD for all case scenarios. Use of jargon, understatement of benefits and failure to state potential risks were the key observations in the AI-generated ICD.

Conclusion: It is likely that LLMs can enhance the identification of potential ethical issues in clinical research, and they can be used as an adjunct tool to prescreen research proposals and enhance the efficiency of an IRB.

Source journal

Journal of Medical Ethics (Medicine: Ethics)
CiteScore: 7.80
Self-citation rate: 9.80%
Articles per year: 164
Review time: 4-8 weeks
About the journal: Journal of Medical Ethics is a leading international journal that reflects the whole field of medical ethics. The journal seeks to promote ethical reflection and conduct in scientific research and medical practice. It features articles on various ethical aspects of health care relevant to health care professionals, members of clinical ethics committees, medical ethics professionals, researchers and bioscientists, policy makers and patients. Subscribers to the Journal of Medical Ethics also receive Medical Humanities journal at no extra cost. JME is the official journal of the Institute of Medical Ethics.