Introducing the Team Card: Enhancing governance for medical Artificial Intelligence (AI) systems in the age of complexity.

Lesedi Mamodise Modise, Mahsa Alborzi Avanaki, Saleem Ameen, Leo A Celi, Victor Xin Yuan Chen, Ashley Cordes, Matthew Elmore, Amelia Fiske, Jack Gallifant, Megan Hayes, Alvin Marcelo, Joao Matos, Luis Nakayama, Ezinwanne Ozoani, Benjamin C Silverman, Donnella S Comeau

PLOS Digital Health 4(3): e0000495. Published 2025-03-04 (eCollection 2025/3/1). DOI: 10.1371/journal.pdig.0000495
This paper introduces the Team Card (TC) as a protocol to address harmful biases in the development of clinical artificial intelligence (AI) systems by emphasizing the often-overlooked role of researchers' positionality. While harmful bias in medical AI, particularly in Clinical Decision Support (CDS) tools, is frequently attributed to issues of data quality, this limited framing neglects how researchers' worldviews, shaped by their training, backgrounds, and experiences, can influence AI design and deployment. These unexamined subjectivities can create epistemic limitations, amplifying biases and increasing the risk of inequitable applications in clinical settings. The TC emphasizes reflexivity (critical self-reflection) as an ethical strategy to identify and address biases stemming from the subjectivity of research teams. By systematically documenting team composition, positionality, and the steps taken to monitor and address unconscious bias, TCs establish a framework for assessing how diversity within teams impacts AI development. Studies across business, science, and organizational contexts demonstrate that diversity improves outcomes, including innovation, decision-making quality, and overall performance. However, epistemic diversity, meaning diverse ways of thinking and problem-solving, must be actively cultivated through intentional, collaborative processes to mitigate bias effectively. By embedding epistemic diversity into research practices, TCs may enhance model performance, improve fairness, and offer an empirical basis for evaluating how diversity influences bias mitigation efforts over time. This represents a critical step toward developing inclusive, ethical, and effective AI systems in clinical care. A publicly available prototype presenting our TC is accessible at https://www.teamcard.io/team/demo.
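The abstract names three things a Team Card documents: team composition, positionality, and bias-monitoring steps. As a purely illustrative sketch (the actual schema used by the teamcard.io prototype is not specified here, so every field name below is a hypothetical assumption), such a record might be modeled as:

```python
from dataclasses import dataclass, field

@dataclass
class TeamCard:
    """Hypothetical record of a research team's composition and reflexivity practices.

    Field names are illustrative assumptions, not the published TC schema.
    """
    project: str
    # Disciplines/roles represented on the team (e.g. "clinician", "ML engineer")
    team_composition: list[str] = field(default_factory=list)
    # Self-reported positionality statements from team members
    positionality_statements: list[str] = field(default_factory=list)
    # Concrete steps taken to monitor and address unconscious bias
    bias_monitoring_steps: list[str] = field(default_factory=list)

    def summary(self) -> str:
        """One-line summary suitable for inclusion in a model's documentation."""
        return (
            f"{self.project}: {len(self.team_composition)} roles, "
            f"{len(self.positionality_statements)} positionality statements, "
            f"{len(self.bias_monitoring_steps)} bias-monitoring steps documented"
        )


card = TeamCard(
    project="Sepsis CDS model",
    team_composition=["intensivist", "data scientist", "bioethicist"],
    positionality_statements=["Trained primarily in US academic medical centers"],
    bias_monitoring_steps=["Quarterly subgroup-performance audit"],
)
print(card.summary())
```

The point of such a structured record is that it can be versioned and published alongside a model, giving an empirical basis for later analyses of how team diversity relates to bias-mitigation outcomes.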