{"title":"Self-supervised Multimodal Speech Representations for the Assessment of Schizophrenia Symptoms","authors":"Gowtham Premananth, Carol Espy-Wilson","doi":"arxiv-2409.09733","DOIUrl":null,"url":null,"abstract":"Multimodal schizophrenia assessment systems have gained traction over the\nlast few years. This work introduces a schizophrenia assessment system to\ndiscern between prominent symptom classes of schizophrenia and predict an\noverall schizophrenia severity score. We develop a Vector Quantized Variational\nAuto-Encoder (VQ-VAE) based Multimodal Representation Learning (MRL) model to\nproduce task-agnostic speech representations from vocal Tract Variables (TVs)\nand Facial Action Units (FAUs). These representations are then used in a\nMulti-Task Learning (MTL) based downstream prediction model to obtain class\nlabels and an overall severity score. The proposed framework outperforms the\nprevious works on the multi-class classification task across all evaluation\nmetrics (Weighted F1 score, AUC-ROC score, and Weighted Accuracy).\nAdditionally, it estimates the schizophrenia severity score, a task not\naddressed by earlier approaches.","PeriodicalId":501034,"journal":{"name":"arXiv - EE - Signal Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.09733","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0
Abstract
Multimodal schizophrenia assessment systems have gained traction over the last few years. This work introduces a schizophrenia assessment system that discerns between prominent symptom classes of schizophrenia and predicts an overall schizophrenia severity score. We develop a Vector Quantized Variational Auto-Encoder (VQ-VAE) based Multimodal Representation Learning (MRL) model that produces task-agnostic speech representations from Vocal Tract Variables (TVs) and Facial Action Units (FAUs). These representations are then used in a Multi-Task Learning (MTL) based downstream prediction model to obtain class labels and an overall severity score. The proposed framework outperforms previous work on the multi-class classification task across all evaluation metrics (Weighted F1 score, AUC-ROC score, and Weighted Accuracy). Additionally, it estimates the schizophrenia severity score, a task not addressed by earlier approaches.
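To make the pipeline described in the abstract concrete, the sketch below shows (1) a VQ-VAE-style multimodal encoder that fuses TV and FAU streams and quantizes the fused latent, and (2) a multi-task head that outputs symptom-class logits and a severity score. This is not the authors' implementation; the feature dimensions, codebook size, GRU encoders, number of symptom classes, and mean pooling are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VectorQuantizer(nn.Module):
    """Standard VQ layer with a straight-through estimator (van den Oord et al., 2017)."""

    def __init__(self, num_codes=512, code_dim=128, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta

    def forward(self, z_e):                      # z_e: (batch, time, code_dim)
        flat = z_e.reshape(-1, z_e.size(-1))
        # Nearest codebook entry for every frame-level latent vector.
        dists = torch.cdist(flat, self.codebook.weight)
        idx = dists.argmin(dim=-1)
        z_q = self.codebook(idx).view_as(z_e)
        # Codebook + commitment losses; gradients copied through with the ST estimator.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        z_q = z_e + (z_q - z_e).detach()
        return z_q, loss


class MultimodalVQVAE(nn.Module):
    """Fuses vocal tract variable (TV) and facial action unit (FAU) streams,
    quantizes the fused latent, and reconstructs both streams.
    Input dimensions (6 TVs, 17 FAUs) are assumptions, not taken from the paper."""

    def __init__(self, tv_dim=6, fau_dim=17, code_dim=128):
        super().__init__()
        self.tv_enc = nn.GRU(tv_dim, code_dim // 2, batch_first=True)
        self.fau_enc = nn.GRU(fau_dim, code_dim // 2, batch_first=True)
        self.vq = VectorQuantizer(code_dim=code_dim)
        self.decoder = nn.GRU(code_dim, code_dim, batch_first=True)
        self.tv_out = nn.Linear(code_dim, tv_dim)
        self.fau_out = nn.Linear(code_dim, fau_dim)

    def forward(self, tv, fau):                  # tv: (B, T, tv_dim), fau: (B, T, fau_dim)
        z_tv, _ = self.tv_enc(tv)
        z_fau, _ = self.fau_enc(fau)
        z_q, vq_loss = self.vq(torch.cat([z_tv, z_fau], dim=-1))
        dec, _ = self.decoder(z_q)
        recon_loss = F.mse_loss(self.tv_out(dec), tv) + F.mse_loss(self.fau_out(dec), fau)
        return z_q, recon_loss + vq_loss


class MultiTaskHead(nn.Module):
    """Downstream MTL model: shared trunk, one branch for symptom-class logits,
    one for the overall severity score (regression)."""

    def __init__(self, code_dim=128, num_classes=3):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU())
        self.cls_head = nn.Linear(64, num_classes)
        self.sev_head = nn.Linear(64, 1)

    def forward(self, z_q):                      # z_q: (B, T, code_dim)
        h = self.trunk(z_q.mean(dim=1))          # simple temporal average pooling
        return self.cls_head(h), self.sev_head(h).squeeze(-1)
```

In this reading of the abstract, the VQ-VAE is pretrained with the reconstruction and VQ losses to yield task-agnostic representations, after which the MTL head is trained on the quantized latents with a cross-entropy loss for the symptom classes and a regression loss for the severity score; the exact loss weighting and class definitions are not specified in the abstract.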