{"title":"“Eh? Aye!”: Categorisation bias for natural human vs AI-augmented voices is influenced by dialect","authors":"Neil W. Kirk","doi":"10.1016/j.chbah.2025.100153","DOIUrl":null,"url":null,"abstract":"<div><div>Advances in AI-assisted voice technology have made it easier to clone or disguise voices, creating a wide range of synthetic voices using different accents, dialects, and languages. While these developments offer positive applications, they also pose risks for misuse. This raises the question as to whether listeners can reliably distinguish between human and AI-enhanced speech and whether prior experiences and expectations about language varieties that are traditionally less-represented by technology affect this ability. Two experiments were conducted to investigate listeners’ ability to categorise voices as human or AI-enhanced in both a standard and a regional Scottish dialect. Using a Signal Detection Theory framework, both experiments explored participants' sensitivity and categorisation biases. In Experiment 1 (<em>N</em> = 100), a predominantly Scottish sample showed above-chance performance in distinguishing between human and AI-enhanced voices, but there was no significant effect of dialect on sensitivity. However, listeners exhibited a bias toward categorising voices as “human”, which was concentrated within the regional Dundonian Scots dialect. In Experiment 2 (<em>N</em> = 100) participants from southern and eastern England, demonstrated reduced overall sensitivity and a <em>Human Categorisation Bias</em> that was more evenly spread across the two dialects. These findings have implications for the growing use of AI-assisted voice technology in linguistically diverse contexts, highlighting both the potential for enhanced representation of Minority, Indigenous, Non-standard and Dialect (MIND) varieties, and the risks of AI misuse.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100153"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior: Artificial Humans","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2949882125000374","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Advances in AI-assisted voice technology have made it easier to clone or disguise voices, creating a wide range of synthetic voices using different accents, dialects, and languages. While these developments offer positive applications, they also pose risks of misuse. This raises the question of whether listeners can reliably distinguish between human and AI-enhanced speech, and whether prior experiences and expectations about language varieties that are traditionally less represented in technology affect this ability. Two experiments investigated listeners’ ability to categorise voices as human or AI-enhanced in both a standard and a regional Scottish dialect. Using a Signal Detection Theory framework, both experiments explored participants’ sensitivity and categorisation biases. In Experiment 1 (N = 100), a predominantly Scottish sample showed above-chance performance in distinguishing between human and AI-enhanced voices, but there was no significant effect of dialect on sensitivity. However, listeners exhibited a bias toward categorising voices as “human”, which was concentrated within the regional Dundonian Scots dialect. In Experiment 2 (N = 100), participants from southern and eastern England demonstrated reduced overall sensitivity and a Human Categorisation Bias that was more evenly spread across the two dialects. These findings have implications for the growing use of AI-assisted voice technology in linguistically diverse contexts, highlighting both the potential for enhanced representation of Minority, Indigenous, Non-standard and Dialect (MIND) varieties and the risks of AI misuse.
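For readers less familiar with the Signal Detection Theory measures mentioned above, the sketch below shows how sensitivity (d′) and response criterion (c) are conventionally computed from a listener’s categorisation responses under the standard equal-variance Gaussian model. It treats human voices as the “signal” class, so a negative criterion corresponds to a bias toward “human” responses. The trial counts and the log-linear correction are illustrative assumptions, not the paper’s data or analysis pipeline.

```python
# Conventional equal-variance SDT measures: sensitivity d' and criterion c.
# Assumption: human voices are the "signal" class, so a hit = labelling a
# human voice "human", and a false alarm = labelling an AI-enhanced voice
# "human". All counts below are hypothetical.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d', c) from a 2x2 table of human/AI categorisation responses."""
    # Log-linear correction keeps rates away from 0 and 1, where z is undefined.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa            # higher = better human/AI discrimination
    criterion = -(z_hit + z_fa) / 2   # negative = bias toward responding "human"
    return d_prime, criterion

# Hypothetical listener: 40 human-voice trials and 40 AI-voice trials.
d_prime, criterion = sdt_measures(hits=32, misses=8,
                                  false_alarms=20, correct_rejections=20)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
# A negative c here would mirror the Human Categorisation Bias reported above.
```

In this framing, above-chance discrimination appears as d′ > 0, while the reported bias toward categorising voices as “human” appears as c < 0 (more “human” responses to both voice types).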