{"title":"A Layer-Anchoring Strategy for Enhancing Cross-Lingual Speech Emotion Recognition","authors":"Shreya G. Upadhyay, Carlos Busso, Chi-Chun Lee","doi":"arxiv-2407.04966","DOIUrl":null,"url":null,"abstract":"Cross-lingual speech emotion recognition (SER) is important for a wide range\nof everyday applications. While recent SER research relies heavily on large\npretrained models for emotion training, existing studies often concentrate\nsolely on the final transformer layer of these models. However, given the\ntask-specific nature and hierarchical architecture of these models, each\ntransformer layer encapsulates different levels of information. Leveraging this\nhierarchical structure, our study focuses on the information embedded across\ndifferent layers. Through an examination of layer feature similarity across\ndifferent languages, we propose a novel strategy called a layer-anchoring\nmechanism to facilitate emotion transfer in cross-lingual SER tasks. Our\napproach is evaluated using two distinct language affective corpora\n(MSP-Podcast and BIIC-Podcast), achieving a best UAR performance of 60.21% on\nthe BIIC-podcast corpus. The analysis uncovers interesting insights into the\nbehavior of popular pretrained models.","PeriodicalId":501178,"journal":{"name":"arXiv - CS - Sound","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Sound","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2407.04966","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Cross-lingual speech emotion recognition (SER) is important for a wide range of everyday applications. While recent SER research relies heavily on large pretrained models for emotion training, existing studies often concentrate solely on the final transformer layer of these models. However, given the task-specific nature and hierarchical architecture of these models, each transformer layer encapsulates a different level of information. Leveraging this hierarchical structure, our study focuses on the information embedded across layers. Through an examination of layer-wise feature similarity across languages, we propose a novel layer-anchoring mechanism to facilitate emotion transfer in cross-lingual SER tasks. Our approach is evaluated on affective corpora in two distinct languages (MSP-Podcast and BIIC-Podcast), achieving a best UAR of 60.21% on the BIIC-Podcast corpus. The analysis uncovers interesting insights into the behavior of popular pretrained models.
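
The abstract gives no implementation details, so the following is only a minimal sketch of the general idea it describes: score each transformer layer of a pretrained model by cross-lingual feature similarity, then pick the best-aligned layer as an anchor for transfer. The choice of linear CKA as the similarity measure, the mean-pooled per-layer features, the row-aligned (paired) utterance sets, and the synthetic data are all assumptions made for illustration, not the authors' actual method.

```python
import numpy as np

def linear_cka(x, y):
    """Linear CKA between two feature matrices with row-aligned samples.

    x, y: (n_samples, dim) arrays, one row per utterance.
    """
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    cross = np.linalg.norm(x.T @ y, ord="fro") ** 2
    return cross / (
        np.linalg.norm(x.T @ x, ord="fro") * np.linalg.norm(y.T @ y, ord="fro")
    )

rng = np.random.default_rng(0)
n_utts, dim, n_layers = 200, 768, 12  # wav2vec 2.0-base-like dimensions

# Stand-ins for mean-pooled hidden states from each transformer layer on
# comparable utterance sets in two languages. In practice these would come
# from a pretrained speech model run with hidden states exposed.
layers_lang_a = [rng.standard_normal((n_utts, dim)) for _ in range(n_layers)]
layers_lang_b = [
    a + (0.3 + 0.1 * i) * rng.standard_normal((n_utts, dim))  # noise grows per layer
    for i, a in enumerate(layers_lang_a)
]

# Score every layer, then anchor on the most cross-lingually similar one.
sims = [linear_cka(a, b) for a, b in zip(layers_lang_a, layers_lang_b)]
anchor_layer = int(np.argmax(sims))
print("per-layer similarity:", np.round(sims, 3))
print("candidate anchor layer:", anchor_layer)
```

In a full pipeline, the representations at the selected anchor layer would presumably guide emotion transfer from the source-language corpus to the target language; the sketch above only covers the layer-selection step.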