Speaker recognition system of flexible throat microphone using contrastive learning
Weiliang Zheng, Zhenxiang Chen, Yang Li, Xiaoqing Jiang, Xueyang Cao
2023 IEEE/ACM 23rd International Symposium on Cluster, Cloud and Internet Computing (CCGrid), May 2023. DOI: 10.1109/CCGrid57682.2023.00065
Flexible pressure sensor-based Throat Microphones (FTM) have recently attracted increasing attention for noise-robust speaker recognition and are promising for enabling speaker recognition for people with specific forms of dysarthria. FTM offers greater flexibility than Hard Throat Microphones (HTM) and better noise robustness than Close-talk Microphones (CM). However, speaker recognition with FTM remains an open problem, since FTM signals suffer from degradation and suitable data sets are scarce. To tackle these two obstacles, drawing on feature mapping methods developed for HTM, we introduce an FTM-oriented supervised contrastive learning (FTMSCL) method. We collect an FTM speech data set and design a contrastive loss function that avoids the shortcomings of feature mapping methods while effectively leveraging the label information in this data set. We further investigate a critical margin parameter in this loss and several data augmentations for FTM. Experimental results show that, without requiring any CM data, FTMSCL achieves a False Acceptance Rate (FAR) of 2.97% and a False Rejection Rate (FRR) of 2.83%, significantly outperforming both a conventional end-to-end system and an advanced feature mapping system. Moreover, the best FAR and FRR of FTMSCL are only 0.86% and 0.83% higher than those of the best system using clean CM data.
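As a concrete illustration of the ideas in the abstract, the sketch below shows, in PyTorch, (i) one common way to build a supervised contrastive loss over speaker labels with a tunable margin on positive pairs, and (ii) the standard FAR/FRR computation at a fixed decision threshold. This is only a hypothetical sketch: the abstract does not give the exact form of the FTMSCL loss, and the function names and parameters here (supervised_contrastive_loss, far_frr, margin, temperature) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only -- NOT the paper's FTMSCL loss. It shows a generic
# supervised contrastive loss with a margin on same-speaker (positive) pairs,
# plus the standard FAR/FRR computation used to report verification results.
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(embeddings, labels, margin=0.1, temperature=0.07):
    """embeddings: (N, D) utterance embeddings; labels: (N,) integer speaker IDs.
    The margin is subtracted from the cosine similarity of positive pairs,
    forcing same-speaker embeddings to be pulled together more tightly."""
    z = F.normalize(embeddings, dim=1)                      # unit-norm embeddings
    sim = z @ z.t()                                          # (N, N) cosine similarities

    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    sim = torch.where(pos_mask, sim - margin, sim) / temperature
    sim = sim.masked_fill(self_mask, float("-inf"))          # drop self-similarity

    # Log-softmax over each row, then average over that anchor's positives.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                                   # anchors with at least one positive
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return (-pos_log_prob[valid] / pos_counts[valid]).mean()


def far_frr(impostor_scores, genuine_scores, threshold):
    """FAR = fraction of impostor trials accepted; FRR = fraction of genuine trials rejected."""
    far = (impostor_scores >= threshold).float().mean().item()
    frr = (genuine_scores < threshold).float().mean().item()
    return far, frr


if __name__ == "__main__":
    emb = torch.randn(8, 192, requires_grad=True)            # 8 utterances, 192-dim embeddings
    spk = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])             # 4 speakers, 2 utterances each
    loss = supervised_contrastive_loss(emb, spk, margin=0.1)
    loss.backward()
    print(f"loss = {loss.item():.4f}")

    far, frr = far_frr(torch.randn(1000) - 1.0, torch.randn(1000) + 1.0, threshold=0.0)
    print(f"FAR = {far:.2%}, FRR = {frr:.2%}")
```

In this sketch, a larger margin makes positive pairs harder to satisfy and so tightens same-speaker clusters, which is one plausible reading of the "critical margin parameter" the abstract mentions; the paper's actual formulation may differ.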