Intoxication detection from speech using representations learned from self-supervised pre-training

JCR: Q2 (Health Professions)
Abigail Albuquerque, Samuel Chibuoyim Uche, Emmanuel Agu
{"title":"使用自监督预训练学习表征的语音中毒检测","authors":"Abigail Albuquerque,&nbsp;Samuel Chibuoyim Uche,&nbsp;Emmanuel Agu","doi":"10.1016/j.smhl.2025.100562","DOIUrl":null,"url":null,"abstract":"<div><div>Alcohol intoxication is one of the leading causes of death around the globe. Existing approaches to prevent Driving Under the Influence (DUI) are expensive, intrusive, or require external apparatus such as breathalyzers, which the drinker may not possess. Speech is a viable modality for detecting intoxication from changes in vocal patterns. Intoxicated speech is slower, has lower amplitude, and is more prone to errors at the sentence, word, and phonological levels than sober speech. However, intoxication detection from speech is challenging due to high inter- and intra-user variability and the confounding effects of other factors such as fatigue, which may also impair speech. This paper investigates Wav2Vec 2.0, a self-supervised neural network architecture, for intoxication classification from audio. Wav2Vec 2.0 is a Transformer-based model that has demonstrated remarkable performance in various speech-related tasks. It analyzes raw audio directly by applying a multi-head attention mechanism to latent audio representations and was pre-trained on the Librispeech, Libri-Light and EmoDB datasets. The proposed model achieved an unweighted average recall of 73.3%, outperforming state-of-the-art models, highlighting its potential for accurate DUI detection to prevent alcohol-related incidents.</div></div>","PeriodicalId":37151,"journal":{"name":"Smart Health","volume":"36 ","pages":"Article 100562"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Intoxication detection from speech using representations learned from self-supervised pre-training\",\"authors\":\"Abigail Albuquerque,&nbsp;Samuel Chibuoyim Uche,&nbsp;Emmanuel Agu\",\"doi\":\"10.1016/j.smhl.2025.100562\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Alcohol intoxication is one of the leading causes of death around the globe. Existing approaches to prevent Driving Under the Influence (DUI) are expensive, intrusive, or require external apparatus such as breathalyzers, which the drinker may not possess. Speech is a viable modality for detecting intoxication from changes in vocal patterns. Intoxicated speech is slower, has lower amplitude, and is more prone to errors at the sentence, word, and phonological levels than sober speech. However, intoxication detection from speech is challenging due to high inter- and intra-user variability and the confounding effects of other factors such as fatigue, which may also impair speech. This paper investigates Wav2Vec 2.0, a self-supervised neural network architecture, for intoxication classification from audio. Wav2Vec 2.0 is a Transformer-based model that has demonstrated remarkable performance in various speech-related tasks. It analyzes raw audio directly by applying a multi-head attention mechanism to latent audio representations and was pre-trained on the Librispeech, Libri-Light and EmoDB datasets. 
The proposed model achieved an unweighted average recall of 73.3%, outperforming state-of-the-art models, highlighting its potential for accurate DUI detection to prevent alcohol-related incidents.</div></div>\",\"PeriodicalId\":37151,\"journal\":{\"name\":\"Smart Health\",\"volume\":\"36 \",\"pages\":\"Article 100562\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-03-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Smart Health\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2352648325000236\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Health Professions\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Smart Health","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352648325000236","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Health Professions","Score":null,"Total":0}
引用次数: 0

Abstract

Alcohol intoxication is one of the leading causes of death around the globe. Existing approaches to prevent Driving Under the Influence (DUI) are expensive, intrusive, or require external apparatus such as breathalyzers, which the drinker may not possess. Speech is a viable modality for detecting intoxication from changes in vocal patterns. Intoxicated speech is slower, has lower amplitude, and is more prone to errors at the sentence, word, and phonological levels than sober speech. However, intoxication detection from speech is challenging due to high inter- and intra-user variability and the confounding effects of other factors such as fatigue, which may also impair speech. This paper investigates Wav2Vec 2.0, a self-supervised neural network architecture, for intoxication classification from audio. Wav2Vec 2.0 is a Transformer-based model that has demonstrated remarkable performance in various speech-related tasks. It analyzes raw audio directly by applying a multi-head attention mechanism to latent audio representations and was pre-trained on the Librispeech, Libri-Light and EmoDB datasets. The proposed model achieved an unweighted average recall of 73.3%, outperforming state-of-the-art models, highlighting its potential for accurate DUI detection to prevent alcohol-related incidents.
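
For readers unfamiliar with the architecture, the sketch below shows how Wav2Vec 2.0 is typically applied to a binary audio-classification task like this one, using the Hugging Face `transformers` library. This is not the authors' released code: the `facebook/wav2vec2-base` checkpoint, the sober/intoxicated label mapping, and the dummy waveform are assumptions for illustration.

```python
# Minimal sketch (not the authors' code): Wav2Vec 2.0 as a binary
# intoxication classifier. Checkpoint name, label map, and the random
# waveform are illustrative assumptions.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base",
    num_labels=2,  # assumed label map: 0 = sober, 1 = intoxicated
)
model.eval()

# Wav2Vec 2.0 consumes raw 16 kHz waveforms directly (no hand-crafted
# features); a 3-second random clip stands in for a real recording here.
waveform = np.random.randn(16000 * 3).astype(np.float32)
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 2)
prediction = int(logits.argmax(dim=-1))
print("intoxicated" if prediction == 1 else "sober")
```

In practice the classification head would first be fine-tuned on labeled sober/intoxicated recordings; a freshly initialized head, as here, produces essentially random predictions.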
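On the reported metric: unweighted average recall (UAR) is the arithmetic mean of the per-class recalls, so it is not inflated by class imbalance between sober and intoxicated recordings. It coincides with macro-averaged recall, as the small check below with made-up labels illustrates.

```python
# UAR = mean of per-class recalls = macro-averaged recall.
# The labels below are made up purely to demonstrate the computation.
from sklearn.metrics import recall_score

y_true = [0, 0, 0, 0, 1, 1, 1, 1]  # 0 = sober, 1 = intoxicated
y_pred = [0, 0, 1, 1, 1, 1, 1, 0]

recall_sober = 2 / 4  # 2 of 4 sober clips correctly recovered
recall_intox = 3 / 4  # 3 of 4 intoxicated clips correctly recovered
uar_manual = (recall_sober + recall_intox) / 2

uar = recall_score(y_true, y_pred, average="macro")
assert abs(uar - uar_manual) < 1e-9
print(f"UAR = {uar:.3f}")  # 0.625
```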
Source journal: Smart Health (Computer Science, Computer Science Applications). CiteScore: 6.50. Self-citation rate: 0.00%. Articles published: 81.