Machine learning for mental health diagnosis: power and extensions of epistemic injustice.
Jonathan McCabe
Journal of Medical Ethics, published 2025-07-25. DOI: 10.1136/jme-2025-111023
Following the article by Ugar and Malele, Pozzi and De Proost provide a necessary addition to the discussion of machine learning (ML) in mental health diagnosis. However, their analysis of contributory injustice, a form of injustice in which the dominant social group fails to recognise the epistemic contributions of a minority, offers only an introduction to the significant harm that can arise at the intersection of ML and culture. ML that is untrained in certain cultural contexts not only leads to the broad dismissal of a group's values but also manifests in more direct, personal forms of harm. Testimonial injustice is a particularly troubling dimension of ML's integration into psychiatry, and this paper examines how social power operates within these systems. First, it is necessary to understand ML as a social entity. Then, through an analysis of Fricker's account of social identity, extended by Brey's framework of the social power of technology, a more complete appreciation of the epistemic injustice caused by ML algorithms in healthcare can be reached.
Journal introduction:
Journal of Medical Ethics is a leading international journal covering the whole field of medical ethics. The journal seeks to promote ethical reflection and conduct in scientific research and medical practice. It features articles on ethical aspects of health care relevant to health care professionals, members of clinical ethics committees, medical ethics professionals, researchers and bioscientists, policy makers and patients.
Subscribers to the Journal of Medical Ethics also receive Medical Humanities journal at no extra cost.
JME is the official journal of the Institute of Medical Ethics.