Authors: Y. Rusinovich, V. Rusinovich
DOI: 10.62487/2rm68r13
Journal: Web3 Journal: ML in Health Science, Volume 190
Published: 2024-02-13 (Journal Article)
Citations: 0
Abstract
Human Categorization with “Dirty” Confounders in AI and ML Medical Models: The Role of Religion
Aim: This study was conducted to evaluate how widely healthcare practitioners and scientific researchers accept current official regulatory recommendations on incorporating human categorization through confounders, such as "Religion", into AI- and ML-based clinical research and healthcare settings. Materials and Methods: An anonymous online survey was conducted on the Telegram platform, where participants were asked a single question: "Do you consider the inclusion of Religious status in Artificial Intelligence and Machine Learning models justified from the perspective of medical ethics and science?" Respondents were given only two response options: "Yes" or "No." The survey specifically targeted international groups, focusing primarily on English- and Russian-speaking clinicians and scientific researchers. Results: 134 unique individuals participated in the survey. Roughly two-thirds of respondents (87 individuals) agreed that including religious status as a predictor in ML and AI models is inappropriate. Conclusion: Roughly two-thirds of healthcare practitioners and scientific researchers agree that categorizing individuals within healthcare settings based on their religion is inappropriate. Educational programs are needed to inform healthcare and scientific professionals that AI and ML applications should be built on unbiased and ethically appropriate predictors. ML is incapable of distinguishing individual human characteristics. Therefore, constructing healthcare AI and ML models on confounders like religion is unlikely to aid in identifying the cause of, or treating, any pathology or disease. Moreover, the high conflict potential of this predictor may further deepen societal disparities.
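The exclusion the abstract argues for can be illustrated with a minimal sketch (field names are hypothetical and not from the study): a preprocessing step that removes sensitive attributes such as religion from the predictor set before any model sees the data, together with the survey arithmetic behind the "two-thirds" figure (87 of 134 respondents).

```python
# Minimal illustrative sketch (hypothetical field names, not the authors' code):
# strip ethically inappropriate confounders such as religion from records
# before they are used as ML predictors.

SENSITIVE_ATTRIBUTES = {"religion"}  # attributes the survey majority deemed inappropriate


def drop_sensitive_features(record: dict, sensitive=SENSITIVE_ATTRIBUTES) -> dict:
    """Return a copy of the record without sensitive predictor fields."""
    return {k: v for k, v in record.items() if k not in sensitive}


# Hypothetical patient record, for illustration only.
patient = {"age": 57, "hba1c": 6.8, "religion": "unspecified"}
features = drop_sensitive_features(patient)
print(features)  # -> {'age': 57, 'hba1c': 6.8}; religion is no longer a predictor

# Survey arithmetic reported in the abstract: 87 of 134 respondents.
share = 87 / 134
print(f"{share:.1%}")  # -> 64.9%, i.e. roughly two-thirds
```

The design point is that the filter runs before model fitting, so no downstream AI/ML component can condition on the removed attribute, whether directly or as an explicit feature.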