Vijay Ravi , Jinhan Wang , Jonathan Flint , Abeer Alwan
Title: Enhancing accuracy and privacy in speech-based depression detection through speaker disentanglement
Journal: Computer Speech and Language (Journal Article; JCR Q2, Computer Science, Artificial Intelligence; Impact Factor 3.1)
DOI: 10.1016/j.csl.2023.101605
Published: 2023-12-26
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S0885230823001249/pdfft?md5=7acff7dbe3c70a9a6ae6cde978bd02e2&pid=1-s2.0-S0885230823001249-main.pdf
Citations: 0
Enhancing accuracy and privacy in speech-based depression detection through speaker disentanglement
Speech signals are valuable biomarkers for assessing an individual’s mental health, including automatically identifying Major Depressive Disorder (MDD). A frequently used approach in this regard is to employ features related to speaker identity, such as speaker embeddings. However, over-reliance on speaker identity features in mental health screening systems can compromise patient privacy. Moreover, some aspects of speaker identity may not be relevant for depression detection and could act as a bias factor that hampers system performance. To overcome these limitations, we propose disentangling speaker-identity information from depression-related information. Specifically, we present four distinct disentanglement methods to achieve this: adversarial speaker identification (SID)-loss maximization (ADV), SID-loss equalization with variance (LEV), SID-loss equalization using cross-entropy (LECE), and SID-loss equalization using KL divergence (LEKLD). Our experiments, which incorporated diverse input features and model architectures, yielded improved F1 scores for MDD detection as well as improved voice-privacy attributes, as quantified by Gain in Voice Distinctiveness (GVD) and De-Identification score (DeID). On the DAIC-WOZ dataset (English), LECE using ComParE16 features achieves the best F1 score of 80%, the audio-only state-of-the-art (SOTA) for depression detection, along with a GVD of −1.1 dB and a DeID of 85%. On the EATD dataset (Mandarin), ADV using the raw audio signal achieves an F1 score of 72.38%, surpassing the multi-modal SOTA, along with a GVD of −0.89 dB and a DeID of 51.21%. By reducing the dependence on speaker-identity-related features, our method offers a promising direction for speech-based depression detection that preserves patient privacy.
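The four methods differ only in the speaker-identity term added to the depression-detection objective. As a minimal sketch (a hypothetical simplification for illustration, not the authors' released code), the ADV objective subtracts the SID loss so the encoder unlearns speaker identity, while the LECE objective pushes the SID head's posterior toward the uniform distribution:

```python
import math

def cross_entropy(probs, target):
    """Cross-entropy between a predicted distribution and a target distribution."""
    return -sum(t * math.log(p) for p, t in zip(probs, target) if t > 0)

def adv_loss(dep_loss, sid_loss, lam=1.0):
    """ADV: maximize the speaker-ID loss (here, by subtracting it) while
    minimizing the depression-detection loss. `lam` is an assumed
    trade-off weight, not a value from the paper."""
    return dep_loss - lam * sid_loss

def lece_loss(dep_loss, sid_probs, lam=1.0):
    """LECE: equalize the speaker-ID posterior by penalizing its
    cross-entropy against the uniform distribution, so the shared
    representation carries no speaker-identity information."""
    n = len(sid_probs)
    uniform = [1.0 / n] * n
    return dep_loss + lam * cross_entropy(sid_probs, uniform)

# When the SID head already outputs a uniform posterior over n speakers,
# the LECE penalty reduces to the entropy of the uniform distribution, log(n).
total = lece_loss(0.5, [0.25, 0.25, 0.25, 0.25], lam=1.0)
```

In a full training setup these scalars would be batch losses from a depression classifier and a speaker-ID head sharing one encoder; the sign flip in ADV is typically realized with a gradient-reversal layer.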
About the journal:
Computer Speech & Language publishes reports of original research related to the recognition, understanding, production, coding and mining of speech and language.
The speech and language sciences have a long history, but it is only relatively recently that large-scale implementation of and experimentation with complex models of speech and language processing has become feasible. Such research is often carried out somewhat separately by practitioners of artificial intelligence, computer science, electronic engineering, information retrieval, linguistics, phonetics, or psychology.