Multi-Task Learning Based End-to-End Speaker Recognition

Yuxuan Pan, Weiqiang Zhang
{"title":"Multi-Task Learning Based End-to-End Speaker Recognition","authors":"Yuxuan Pan, Weiqiang Zhang","doi":"10.1145/3372806.3372818","DOIUrl":null,"url":null,"abstract":"Recently, there has been an increasing interest in end-to-end speaker recognition that directly take raw speech waveform as input without any hand-crafted features such as FBANK and MFCC. SincNet is a recently developed novel convolutional neural network (CNN) architecture in which the filters in the first convolutional layer are set to band-pass filters (sinc functions). Experiments show that SincNet achieves a significant decrease in frame error rate (FER) than traditional CNNs and DNNs.\n In this paper we demonstrate how to improve the performance of SincNet using Multi-Task learning (MTL). In the proposed Sinc- Net architecture, besides the main task (speaker recognition), a phoneme recognition task is employed as an auxiliary task. The network uses sinc layers and convolutional layers as shared layers to improve the extensiveness of the network, and the outputs of shared layers are fed into two different sets of full-connected layers for classification. Our experiments, conducted on TIMIT corpora, show that the proposed architecture SincNet-MTL performs better than standard SincNet architecture in both classification error rates (CER) and convergence rate.","PeriodicalId":340004,"journal":{"name":"International Conference on Signal Processing and Machine Learning","volume":"90 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Conference on Signal Processing and Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3372806.3372818","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Recently, there has been increasing interest in end-to-end speaker recognition systems that take the raw speech waveform directly as input, without hand-crafted features such as FBANK or MFCC. SincNet is a recently developed convolutional neural network (CNN) architecture in which the filters of the first convolutional layer are constrained to be band-pass filters (sinc functions). Experiments show that SincNet achieves a significantly lower frame error rate (FER) than traditional CNNs and DNNs. In this paper we demonstrate how to improve the performance of SincNet using multi-task learning (MTL). In the proposed SincNet architecture, besides the main task (speaker recognition), a phoneme recognition task is employed as an auxiliary task. The network uses sinc layers and convolutional layers as shared layers to improve the generalization of the network, and the outputs of the shared layers are fed into two different sets of fully connected layers for classification. Our experiments, conducted on the TIMIT corpus, show that the proposed SincNet-MTL architecture outperforms the standard SincNet architecture in both classification error rate (CER) and convergence rate.
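The architecture described in the abstract, a sinc-parameterized first layer followed by shared convolutional layers and two fully connected heads trained jointly, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' exact configuration: it assumes the published SincNet band-pass parameterization g[n] = 2*f2*sinc(2*pi*f2*n) - 2*f1*sinc(2*pi*f1*n) for the first layer, and the layer sizes, the 462-speaker / 39-phoneme output dimensions (typical TIMIT settings), and the auxiliary loss weight are assumptions chosen for illustration.

```python
# Hypothetical sketch of a SincNet-style multi-task model (speaker + phoneme heads).
# Normalization layers, dropout, and exact hyperparameters from the paper are omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SincConv1d(nn.Module):
    """First layer: band-pass filters parameterized by learnable cut-off frequencies."""

    def __init__(self, out_channels=80, kernel_size=251, sample_rate=16000):
        super().__init__()
        self.sample_rate = sample_rate
        # Initial cut-offs spread over the usable band (illustrative initialization).
        low = torch.linspace(30.0, sample_rate / 2 - 200.0, out_channels)
        band = torch.full((out_channels,), 100.0)
        self.low_hz = nn.Parameter(low.unsqueeze(1))      # (out_channels, 1)
        self.band_hz = nn.Parameter(band.unsqueeze(1))    # (out_channels, 1)
        n = torch.arange(-(kernel_size // 2), kernel_size // 2 + 1).float()
        self.register_buffer("n", n / sample_rate)        # time axis in seconds
        self.register_buffer("window", torch.hamming_window(kernel_size))

    def forward(self, x):                                 # x: (batch, 1, samples)
        low = torch.abs(self.low_hz)
        high = torch.clamp(low + torch.abs(self.band_hz), max=self.sample_rate / 2)

        def lowpass(f):
            # 2*f*sinc(2*pi*f*n); torch.sinc is the normalized sinc, hence 2*f*n.
            return 2 * f * torch.sinc(2 * f * self.n)

        # Band-pass filter = difference of two low-pass sinc filters, windowed.
        filters = (lowpass(high) - lowpass(low)) * self.window
        filters = filters / (filters.abs().sum(dim=1, keepdim=True) + 1e-8)
        return F.conv1d(x, filters.unsqueeze(1))


class SincNetMTL(nn.Module):
    """Shared sinc + conv layers with two heads: speaker (main) and phoneme (auxiliary)."""

    def __init__(self, n_speakers=462, n_phonemes=39):
        super().__init__()
        self.shared = nn.Sequential(
            SincConv1d(80, 251), nn.MaxPool1d(3), nn.LeakyReLU(0.2),
            nn.Conv1d(80, 60, 5), nn.MaxPool1d(3), nn.LeakyReLU(0.2),
            nn.Conv1d(60, 60, 5), nn.MaxPool1d(3), nn.LeakyReLU(0.2),
        )
        self.speaker_head = nn.Sequential(
            nn.LazyLinear(2048), nn.LeakyReLU(0.2), nn.Linear(2048, n_speakers)
        )
        self.phoneme_head = nn.Sequential(
            nn.LazyLinear(2048), nn.LeakyReLU(0.2), nn.Linear(2048, n_phonemes)
        )

    def forward(self, wav):                               # wav: (batch, 1, samples)
        h = self.shared(wav).flatten(start_dim=1)
        return self.speaker_head(h), self.phoneme_head(h)


def mtl_loss(spk_logits, phn_logits, spk_labels, phn_labels, aux_weight=0.3):
    # Total loss = main (speaker) loss + weighted auxiliary (phoneme) loss;
    # the weight 0.3 is an assumed value, not taken from the paper.
    return F.cross_entropy(spk_logits, spk_labels) + \
        aux_weight * F.cross_entropy(phn_logits, phn_labels)


if __name__ == "__main__":
    model = SincNetMTL()
    wav = torch.randn(4, 1, 3200)      # four 200 ms chunks at 16 kHz
    spk_logits, phn_logits = model(wav)
    print(spk_logits.shape, phn_logits.shape)
```

At inference time only the speaker head is needed; the phoneme head serves purely as a training-time regularizer on the shared sinc and convolutional layers, which is the usual role of an auxiliary task in this kind of MTL setup.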