AI to renew public employment services? Explanation and trust of domain experts

Thomas Souverain
{"title":"AI to renew public employment services? Explanation and trust of domain experts","authors":"Thomas Souverain","doi":"10.1007/s43681-024-00629-w","DOIUrl":null,"url":null,"abstract":"<div><p>It is often assumed in explainable AI (XAI) literature that explaining AI predictions will enhance trust of users. To bridge this research gap, we explored trust in XAI on public policies. The French Employment Agency deploys neural networks since 2021 to help job counsellors reject the illegal employment offers. Digging into that case, we adopted philosophical lens on trust in AI which is also compatible with measurements, on demonstrated and perceived trust. We performed a three-months experimental study, joining sociological and psychological methods: Qualitative (S1): Relying on sociological field work methods, we conducted 1 h semi-structured interviews with job counsellors. On 5 regional agencies, we asked 18 counsellors to describe their work practices with AI warnings. Quantitative (S2): Having gathered agents' perceptions, we quantified the reasons to trust AI. We administered a questionnaire, comparing three homogeneous cohorts of 100 counsellors each with different information on AI. We tested the impact of two local XAI, general rule and counterfactual rewording. Our survey provided empirical evidence for the link between XAI and trust, but it also stressed that XAI supports differently appeal to rationality. The rule helps advisors to be sure that criteria motivating AI predictions comply with the law, whereas counterfactual raises doubts on the offer’s quality. Whereas XAI enhanced both demonstrated and perceived trust, our study also revealed limits to full adoption, based on profiles of experts. XAI could more efficiently trigger trust, but only when addressing personal beliefs, or rearranging work conditions to let experts the time to understand AI.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 1","pages":"55 - 70"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-024-00629-w","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

It is often assumed in the explainable AI (XAI) literature that explaining AI predictions will enhance users' trust, yet this assumption has rarely been tested empirically. To bridge this research gap, we explored trust in XAI in the context of public policy. The French Employment Agency has deployed neural networks since 2021 to help job counsellors reject illegal employment offers. Digging into that case, we adopted a philosophical lens on trust in AI that is also compatible with measurement, distinguishing demonstrated and perceived trust. We performed a three-month experimental study combining sociological and psychological methods. Qualitative (S1): relying on sociological fieldwork methods, we conducted one-hour semi-structured interviews with job counsellors; across 5 regional agencies, we asked 18 counsellors to describe their work practices with AI warnings. Quantitative (S2): having gathered agents' perceptions, we quantified the reasons to trust AI; we administered a questionnaire comparing three homogeneous cohorts of 100 counsellors each, given different information on the AI. We tested the impact of two local XAI formats: a general rule and a counterfactual rewording. Our survey provided empirical evidence for the link between XAI and trust, but it also stressed that the two XAI supports appeal to rationality differently. The rule helps advisors check that the criteria motivating AI predictions comply with the law, whereas the counterfactual raises doubts about the offer's quality. While XAI enhanced both demonstrated and perceived trust, our study also revealed limits to full adoption, depending on experts' profiles. XAI could trigger trust more efficiently, but only when addressing personal beliefs or rearranging work conditions to give experts the time to understand AI.
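
To make the contrast between the two local XAI formats mentioned in the abstract more concrete, the sketch below shows how a general-rule explanation and a counterfactual rewording might be phrased for a toy offer-flagging function. This is a purely illustrative assumption: the feature names, flagging logic, and explanation wording are hypothetical and are not drawn from the agency's actual neural network or templates.

```python
# Hypothetical sketch: two local explanation styles for a toy "illegal offer" flagger.
# All features, thresholds, and wording are illustrative assumptions, not the
# French Employment Agency's actual model or explanation formats.

from dataclasses import dataclass


@dataclass
class Offer:
    mentions_salary: bool       # the offer states a salary
    requires_upfront_fee: bool  # the candidate is asked to pay before hiring
    contract_type_stated: bool  # a legal contract type is specified


def flag_offer(offer: Offer) -> bool:
    """Toy stand-in for the classifier: flag offers that look illegal."""
    return offer.requires_upfront_fee or not offer.contract_type_stated


def rule_explanation(offer: Offer) -> str:
    """General-rule style: cite the legal criterion behind the warning."""
    reasons = []
    if offer.requires_upfront_fee:
        reasons.append("offers requiring an upfront payment are prohibited")
    if not offer.contract_type_stated:
        reasons.append("every offer must state a legal contract type")
    return "Flagged because: " + "; ".join(reasons) + "."


def counterfactual_explanation(offer: Offer) -> str:
    """Counterfactual-rewording style: say what change would lift the warning."""
    changes = []
    if offer.requires_upfront_fee:
        changes.append("remove the upfront fee")
    if not offer.contract_type_stated:
        changes.append("state the contract type")
    return "The offer would not be flagged if you " + " and ".join(changes) + "."


if __name__ == "__main__":
    offer = Offer(mentions_salary=True, requires_upfront_fee=True,
                  contract_type_stated=False)
    if flag_offer(offer):
        print(rule_explanation(offer))
        print(counterfactual_explanation(offer))
```

The design difference mirrors the finding reported in the abstract: the rule points the counsellor to the legal criterion driving the warning, which supports checking compliance, whereas the counterfactual points to the edit that would change the prediction, which invites scrutiny of the offer itself.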
