Human Categorization with “Dirty” Confounders in AI and ML Medical Models: The Role of Religion

Y. Rusinovich, V. Rusinovich
DOI: 10.62487/2rm68r13 · Web3 Journal: ML in Health Science · Published: 2024-02-13 (Journal Article)

Abstract

Aim: This study evaluated the acceptance among healthcare practitioners and scientific researchers of the current official regulatory recommendations on incorporating human categorization through confounders, such as "Religion," into AI- and ML-based clinical research and healthcare settings.

Materials and Methods: An anonymous online survey was conducted on the Telegram platform. Participants were asked a single question: "Do you consider the inclusion of religious status in Artificial Intelligence and Machine Learning models justified from the perspective of medical ethics and science?" with only two response options: "Yes" or "No." The survey targeted international groups, focusing primarily on English- and Russian-speaking clinicians and scientific researchers.

Results: 134 unique individuals participated in the survey. Roughly two-thirds of respondents (87 of 134) considered the inclusion of religious status as a predictor in ML and AI models inappropriate.

Conclusion: Two-thirds of the surveyed healthcare practitioners and scientific researchers agree that categorizing individuals in healthcare settings by religion is inappropriate. Educational programs are needed to inform healthcare and scientific professionals that AI and ML applications should be built on unbiased and ethically appropriate predictors. ML is incapable of distinguishing individual human characteristics; therefore, constructing healthcare AI and ML models on confounders like religion is unlikely to help identify the cause of, or treat, any pathology or disease. Moreover, the high conflict potential of this predictor may further deepen societal disparities.
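The reported split can be summarized with a quick binomial calculation from the stated counts (134 respondents, 87 answering "No"). The Wilson score interval below is an illustrative choice of uncertainty estimate, not part of the study's methods:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

n, disagreed = 134, 87          # respondents; those who deemed inclusion inappropriate
p = disagreed / n
low, high = wilson_ci(disagreed, n)
print(f"{p:.1%} (95% CI: {low:.1%}-{high:.1%})")  # 64.9% (95% CI: 56.5%-72.5%)
```

The point estimate (64.9%) is slightly below a literal two-thirds, and the interval shows the majority finding is robust at this sample size.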