Introducing the Team Card: Enhancing governance for medical Artificial Intelligence (AI) systems in the age of complexity.

PLOS Digital Health 4(3): e0000495. Published 2025-03-04 (eCollection 2025-03-01). DOI: 10.1371/journal.pdig.0000495. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11878906/pdf/
Lesedi Mamodise Modise, Mahsa Alborzi Avanaki, Saleem Ameen, Leo A Celi, Victor Xin Yuan Chen, Ashley Cordes, Matthew Elmore, Amelia Fiske, Jack Gallifant, Megan Hayes, Alvin Marcelo, Joao Matos, Luis Nakayama, Ezinwanne Ozoani, Benjamin C Silverman, Donnella S Comeau
{"title":"Introducing the Team Card: Enhancing governance for medical Artificial Intelligence (AI) systems in the age of complexity.","authors":"Lesedi Mamodise Modise, Mahsa Alborzi Avanaki, Saleem Ameen, Leo A Celi, Victor Xin Yuan Chen, Ashley Cordes, Matthew Elmore, Amelia Fiske, Jack Gallifant, Megan Hayes, Alvin Marcelo, Joao Matos, Luis Nakayama, Ezinwanne Ozoani, Benjamin C Silverman, Donnella S Comeau","doi":"10.1371/journal.pdig.0000495","DOIUrl":null,"url":null,"abstract":"<p><p>This paper introduces the Team Card (TC) as a protocol to address harmful biases in the development of clinical artificial intelligence (AI) systems by emphasizing the often-overlooked role of researchers' positionality. While harmful bias in medical AI, particularly in Clinical Decision Support (CDS) tools, is frequently attributed to issues of data quality, this limited framing neglects how researchers' worldviews-shaped by their training, backgrounds, and experiences-can influence AI design and deployment. These unexamined subjectivities can create epistemic limitations, amplifying biases and increasing the risk of inequitable applications in clinical settings. The TC emphasizes reflexivity-critical self-reflection-as an ethical strategy to identify and address biases stemming from the subjectivity of research teams. By systematically documenting team composition, positionality, and the steps taken to monitor and address unconscious bias, TCs establish a framework for assessing how diversity within teams impacts AI development. Studies across business, science, and organizational contexts demonstrate that diversity improves outcomes, including innovation, decision-making quality, and overall performance. However, epistemic diversity-diverse ways of thinking and problem-solving-must be actively cultivated through intentional, collaborative processes to mitigate bias effectively. By embedding epistemic diversity into research practices, TCs may enhance model performance, improve fairness and offer an empirical basis for evaluating how diversity influences bias mitigation efforts over time. This represents a critical step toward developing inclusive, ethical, and effective AI systems in clinical care. A publicly available prototype presenting our TC is accessible at https://www.teamcard.io/team/demo.</p>","PeriodicalId":74465,"journal":{"name":"PLOS digital health","volume":"4 3","pages":"e0000495"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11878906/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PLOS digital health","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1371/journal.pdig.0000495","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/3/1 0:00:00","PubModel":"eCollection","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

This paper introduces the Team Card (TC) as a protocol to address harmful biases in the development of clinical artificial intelligence (AI) systems by emphasizing the often-overlooked role of researchers' positionality. While harmful bias in medical AI, particularly in Clinical Decision Support (CDS) tools, is frequently attributed to issues of data quality, this limited framing neglects how researchers' worldviews, shaped by their training, backgrounds, and experiences, can influence AI design and deployment. These unexamined subjectivities can create epistemic limitations, amplifying biases and increasing the risk of inequitable applications in clinical settings. The TC emphasizes reflexivity (critical self-reflection) as an ethical strategy to identify and address biases stemming from the subjectivity of research teams. By systematically documenting team composition, positionality, and the steps taken to monitor and address unconscious bias, TCs establish a framework for assessing how diversity within teams impacts AI development. Studies across business, science, and organizational contexts demonstrate that diversity improves outcomes, including innovation, decision-making quality, and overall performance. However, epistemic diversity (diverse ways of thinking and problem-solving) must be actively cultivated through intentional, collaborative processes to mitigate bias effectively. By embedding epistemic diversity into research practices, TCs may enhance model performance, improve fairness, and offer an empirical basis for evaluating how diversity influences bias mitigation efforts over time. This represents a critical step toward developing inclusive, ethical, and effective AI systems in clinical care. A publicly available prototype presenting our TC is accessible at https://www.teamcard.io/team/demo.
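
To make the documentation idea concrete, the following is a minimal sketch in Python of the kinds of fields a Team Card record might capture, based solely on the three elements the abstract names: team composition, positionality, and bias-monitoring steps. All class and field names here are hypothetical illustrations, not the paper's actual schema; the authors' prototype at https://www.teamcard.io/team/demo shows their real format.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TeamMember:
        # One entry in the team-composition record (hypothetical fields).
        role: str                  # e.g., "clinician", "ML engineer"
        training_background: str   # disciplinary training that shapes worldview
        positionality: str         # self-described standpoint relevant to the project

    @dataclass
    class TeamCard:
        # Hypothetical container mirroring the abstract's three documented elements.
        project: str
        members: List[TeamMember] = field(default_factory=list)
        bias_monitoring_steps: List[str] = field(default_factory=list)

    # Example: a card for a hypothetical CDS project.
    card = TeamCard(
        project="Sepsis risk CDS model",
        members=[
            TeamMember(
                role="clinician",
                training_background="internal medicine",
                positionality="practices in an urban safety-net hospital",
            ),
        ],
        bias_monitoring_steps=[
            "periodic reflexivity discussion documented in the card",
            "subgroup performance audit before clinical deployment",
        ],
    )
    print(f"{card.project}: {len(card.members)} member(s) documented")

Keeping the record as plain structured data, rather than free text, is what would let the abstract's proposed empirical evaluation (relating team diversity to bias-mitigation outcomes over time) be carried out across many projects.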
