“Eh? Aye!”: Categorisation bias for natural human vs AI-augmented voices is influenced by dialect

Neil W. Kirk
{"title":"“Eh? Aye!”: Categorisation bias for natural human vs AI-augmented voices is influenced by dialect","authors":"Neil W. Kirk","doi":"10.1016/j.chbah.2025.100153","DOIUrl":null,"url":null,"abstract":"<div><div>Advances in AI-assisted voice technology have made it easier to clone or disguise voices, creating a wide range of synthetic voices using different accents, dialects, and languages. While these developments offer positive applications, they also pose risks for misuse. This raises the question as to whether listeners can reliably distinguish between human and AI-enhanced speech and whether prior experiences and expectations about language varieties that are traditionally less-represented by technology affect this ability. Two experiments were conducted to investigate listeners’ ability to categorise voices as human or AI-enhanced in both a standard and a regional Scottish dialect. Using a Signal Detection Theory framework, both experiments explored participants' sensitivity and categorisation biases. In Experiment 1 (<em>N</em> = 100), a predominantly Scottish sample showed above-chance performance in distinguishing between human and AI-enhanced voices, but there was no significant effect of dialect on sensitivity. However, listeners exhibited a bias toward categorising voices as “human”, which was concentrated within the regional Dundonian Scots dialect. In Experiment 2 (<em>N</em> = 100) participants from southern and eastern England, demonstrated reduced overall sensitivity and a <em>Human Categorisation Bias</em> that was more evenly spread across the two dialects. These findings have implications for the growing use of AI-assisted voice technology in linguistically diverse contexts, highlighting both the potential for enhanced representation of Minority, Indigenous, Non-standard and Dialect (MIND) varieties, and the risks of AI misuse.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100153"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000374","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Advances in AI-assisted voice technology have made it easier to clone or disguise voices, creating a wide range of synthetic voices using different accents, dialects, and languages. While these developments offer positive applications, they also pose risks of misuse. This raises the question of whether listeners can reliably distinguish between human and AI-enhanced speech, and whether prior experiences and expectations about language varieties that are traditionally less represented by technology affect this ability. Two experiments were conducted to investigate listeners' ability to categorise voices as human or AI-enhanced in both a standard and a regional Scottish dialect. Using a Signal Detection Theory framework, both experiments explored participants' sensitivity and categorisation biases. In Experiment 1 (N = 100), a predominantly Scottish sample showed above-chance performance in distinguishing between human and AI-enhanced voices, but there was no significant effect of dialect on sensitivity. However, listeners exhibited a bias toward categorising voices as "human", which was concentrated within the regional Dundonian Scots dialect. In Experiment 2 (N = 100), participants from southern and eastern England demonstrated reduced overall sensitivity and a Human Categorisation Bias that was more evenly spread across the two dialects. These findings have implications for the growing use of AI-assisted voice technology in linguistically diverse contexts, highlighting both the potential for enhanced representation of Minority, Indigenous, Non-standard and Dialect (MIND) varieties, and the risks of AI misuse.
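
For readers unfamiliar with the Signal Detection Theory measures mentioned above, the sketch below illustrates how sensitivity (d′) and response criterion (c) are commonly computed from hit and false-alarm counts. It is an illustrative example only, not the authors' analysis code; the trial counts are hypothetical, and it assumes that "human" responses to genuinely human voices are treated as hits and "human" responses to AI-enhanced voices as false alarms, so a negative c corresponds to the Human Categorisation Bias described in the abstract.

```python
# Minimal Signal Detection Theory sketch (illustrative; not the paper's analysis code).
# Assumption: responding "human" to a human voice = hit; responding "human" to an
# AI-enhanced voice = false alarm. A negative criterion c then indicates a bias
# toward categorising voices as "human".
from scipy.stats import norm


def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return sensitivity (d') and criterion (c), using a log-linear correction
    so that hit/false-alarm rates of 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion


# Hypothetical listener who labels most voices "human" regardless of their source.
d, c = sdt_measures(hits=28, misses=2, false_alarms=18, correct_rejections=12)
print(f"d' = {d:.2f}, c = {c:.2f}")  # c < 0 reflects a "human" categorisation bias
```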