Cautious optimism: public voices on medical AI and sociotechnical harm.

Frontiers in Digital Health · IF 3.2 · Q1 (Health Care Sciences & Services)
Pub Date: 2025-09-23 · eCollection Date: 2025-01-01 · DOI: 10.3389/fdgth.2025.1625747
Beverley A Townsend, Victoria J Hodge, Hannah Richardson, Radu Calinescu, T T Arvind
{"title":"Cautious optimism: public voices on medical AI and sociotechnical harm.","authors":"Beverley A Townsend, Victoria J Hodge, Hannah Richardson, Radu Calinescu, T T Arvind","doi":"10.3389/fdgth.2025.1625747","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Medical-purpose software and Artificial Intelligence (\"AI\")-enabled technologies (\"medical AI\") raise important social, ethical, cultural, and regulatory challenges. To elucidate these important challenges, we present the findings of a qualitative study undertaken to elicit public perspectives and expectations around medical AI adoption and related sociotechnical harm. Sociotechnical harm refers to any adverse implications including, but not limited to, physical, psychological, social, and cultural impacts experienced by a person or broader society as a result of medical AI adoption. The work is intended to guide effective policy interventions to address, prioritise, and mitigate such harm.</p><p><strong>Methods: </strong>Using a qualitative design approach, twenty interviews and/or long-form questionnaires were completed between September and November 2024 with UK participants to explore their perspectives, expectations, and concerns around medical AI adoption and related sociotechnical harm. An emphasis was placed on diversity and inclusion, with study participants drawn from racially, ethnically, and linguistically diverse groups and from self-identified minority groups. A thematic analysis of interview transcripts and questionnaire responses was conducted to identify general medical AI perception and sociotechnical harm.</p><p><strong>Results: </strong>Our findings demonstrate that while participants are cautiously optimistic about medical AI adoption, all participants expressed concern about matters related to sociotechnical harm. This included potential harm to human autonomy, alienation and a reduction in standards of care, the lack of value alignment and integration, epistemic injustice, bias and discrimination, and issues around access and equity, explainability and transparency, and data privacy and data-related harm. While responsibility was seen to be shared, participants located responsibility for addressing sociotechnical harm primarily with the regulatory authorities. 
An identified concern was risk of exclusion and inequitable access on account of practical barriers such as physical limitations, technical competency, language barriers, or financial constraints.</p><p><strong>Conclusion: </strong>We conclude that medical AI adoption can be better supported through identifying, prioritising, and addressing sociotechnical harm including the development of clear impact and mitigation practices, embedding pro-social values within the system, and through effective policy guidance intervention.</p>","PeriodicalId":73078,"journal":{"name":"Frontiers in digital health","volume":"7 ","pages":"1625747"},"PeriodicalIF":3.2000,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12500676/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in digital health","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fdgth.2025.1625747","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Citations: 0

Abstract

Background: Medical-purpose software and Artificial Intelligence ("AI")-enabled technologies ("medical AI") raise important social, ethical, cultural, and regulatory challenges. To elucidate these challenges, we present the findings of a qualitative study undertaken to elicit public perspectives and expectations around medical AI adoption and related sociotechnical harm. Sociotechnical harm refers to any adverse implication, including but not limited to physical, psychological, social, and cultural impacts, experienced by a person or broader society as a result of medical AI adoption. The work is intended to guide effective policy interventions that address, prioritise, and mitigate such harm.

Methods: Using a qualitative design, twenty UK participants completed interviews, long-form questionnaires, or both between September and November 2024, exploring their perspectives, expectations, and concerns around medical AI adoption and related sociotechnical harm. An emphasis was placed on diversity and inclusion, with participants drawn from racially, ethnically, and linguistically diverse groups and from self-identified minority groups. A thematic analysis of interview transcripts and questionnaire responses was conducted to identify general perceptions of medical AI and of sociotechnical harm.
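To make the thematic-analysis step concrete, below is a minimal, purely illustrative Python sketch of tallying manually coded themes across participant transcripts. The theme labels, participant IDs, and the `tally_themes` helper are all hypothetical; the study itself performed qualitative thematic analysis, not an automated script.

```python
# Illustrative sketch only: tally how many participants raised each
# manually coded theme. All data and labels below are hypothetical.
from collections import Counter

# Hypothetical per-participant theme codes produced by manual coding.
coded_transcripts = {
    "P01": ["autonomy", "data_privacy", "access_equity"],
    "P02": ["bias_discrimination", "explainability", "data_privacy"],
    "P03": ["autonomy", "epistemic_injustice"],
}

def tally_themes(transcripts: dict[str, list[str]]) -> Counter:
    """Count how many participants raised each coded theme."""
    counts: Counter = Counter()
    for themes in transcripts.values():
        counts.update(set(themes))  # count each theme once per participant
    return counts

if __name__ == "__main__":
    for theme, n in tally_themes(coded_transcripts).most_common():
        print(f"{theme}: raised by {n} of {len(coded_transcripts)} participants")
```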

Results: Our findings demonstrate that while participants are cautiously optimistic about medical AI adoption, all expressed concern about matters related to sociotechnical harm. These concerns included potential harm to human autonomy; alienation and a reduction in standards of care; a lack of value alignment and integration; epistemic injustice; bias and discrimination; and issues around access and equity, explainability and transparency, and data privacy and data-related harm. While responsibility was seen to be shared, participants located responsibility for addressing sociotechnical harm primarily with regulatory authorities. One identified concern was the risk of exclusion and inequitable access owing to practical barriers such as physical limitations, limited technical competency, language barriers, or financial constraints.

Conclusion: We conclude that medical AI adoption can be better supported by identifying, prioritising, and addressing sociotechnical harm: developing clear impact-assessment and mitigation practices, embedding pro-social values within these systems, and providing effective policy guidance.
