Towards a Breakthrough Speaker Identification Approach for Law Enforcement Agencies: SIIP
Khaled Khelif, Yann Mombrun, G. Backfried, Farhan Sahito, L. Scarpato, P. Motlícek, S. Madikeri, Damien Kelly, Gideon Hazzani, Emmanouil Chatzigavriil
2017 European Intelligence and Security Informatics Conference (EISIC), September 2017
DOI: 10.1109/EISIC.2017.14
Citations: 8
Abstract
This paper describes SIIP (Speaker Identification Integrated Project), a high-performance, innovative and sustainable Speaker Identification (SID) solution that runs over large databases of voice samples. The solution is based on the development, integration and fusion of a series of speech-analytics algorithms, including speaker model recognition, gender identification, age identification, language and accent identification, and keyword and taxonomy spotting. A fully integrated system is proposed, ensuring multi-source data management, advanced voice analysis, information sharing, and efficient and consistent man-machine interaction.
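The abstract describes fusing the outputs of several speech analytics into a single speaker-identification decision. The sketch below illustrates one simple way such score-level fusion could work; it is not the SIIP method, and the score names, weights and normalisation are illustrative assumptions only.

```python
# Minimal sketch (not the SIIP implementation): weighted score-level fusion of
# several speech-analytics outputs into one speaker-identification score.
# Score names and weights are hypothetical and would be tuned on labelled data.

from dataclasses import dataclass


@dataclass
class AnalyticScores:
    """Per-candidate scores from independent speech analytics, assumed in [0, 1]."""
    speaker_model: float    # similarity to the enrolled speaker model
    gender_match: float     # agreement with the expected gender
    age_match: float        # agreement with the expected age band
    language_match: float   # agreement with the expected language/accent
    keyword_hits: float     # normalised keyword/taxonomy spotting score


# Hypothetical fusion weights (sum to 1.0).
WEIGHTS = {
    "speaker_model": 0.60,
    "gender_match": 0.10,
    "age_match": 0.05,
    "language_match": 0.15,
    "keyword_hits": 0.10,
}


def fuse(scores: AnalyticScores) -> float:
    """Return a fused score in [0, 1] as a weighted sum of the analytic scores."""
    return (
        WEIGHTS["speaker_model"] * scores.speaker_model
        + WEIGHTS["gender_match"] * scores.gender_match
        + WEIGHTS["age_match"] * scores.age_match
        + WEIGHTS["language_match"] * scores.language_match
        + WEIGHTS["keyword_hits"] * scores.keyword_hits
    )


if __name__ == "__main__":
    candidate = AnalyticScores(0.82, 1.0, 0.7, 0.9, 0.4)
    print(f"fused score: {fuse(candidate):.3f}")
```

A linear weighted sum is only the simplest choice; systems of this kind may instead calibrate and combine scores with a trained classifier, but that level of detail is not given in the abstract.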