Danny Bastian Manurung, B. Dirgantoro, C. Setianingsih
Title: Speaker Recognition For Digital Forensic Audio Analysis Using Learning Vector Quantization Method
Published in: 2018 IEEE International Conference on Internet of Things and Intelligence System (IOTAIS), November 2018
DOI: 10.1109/IOTAIS.2018.8600852
Citations: 8
Abstract
Biometric features are often used to identify suspects in law enforcement. One such feature is the voice: speaker recognition distinguishes people by how they speak. This study addresses the problem of matching audio samples found in evidence against the voice of a suspect. In this final project, a prototype application was built that applies speaker recognition to classify the speaker's voice in the evidence against the suspect's voice. To compare the recordings, sound features are first extracted with the Mel-Frequency Cepstral Coefficients (MFCC) method, and a Learning Vector Quantization neural network (LVQ) then classifies the resulting feature vectors. The LVQ method achieves fairly good recognition accuracy: a best accuracy of 73.33% when the test utterance is the same sentence as the training data, and 46.67% for a different sentence. The results obtained are thus in line with expectations.
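The paper does not include code, but the classification stage it describes — LVQ applied to per-speaker feature vectors — can be sketched as a minimal LVQ1 learner in NumPy. This is an illustrative reconstruction, not the authors' implementation: the function names, the prototype initialization, and the decaying learning rate are assumptions, and the synthetic 2-D clusters below merely stand in for the MFCC vectors the paper extracts.

```python
import numpy as np

def train_lvq(X, y, n_protos_per_class=1, lr=0.1, epochs=20, seed=0):
    """LVQ1: the nearest prototype moves toward a sample of its own class,
    and away from a sample of a different class."""
    rng = np.random.default_rng(seed)
    protos, proto_labels = [], []
    for c in np.unique(y):
        # Initialize each class's prototypes from random samples of that class
        idx = rng.choice(np.where(y == c)[0], n_protos_per_class, replace=False)
        protos.append(X[idx])
        proto_labels.extend([c] * n_protos_per_class)
    protos = np.vstack(protos).astype(float)
    proto_labels = np.array(proto_labels)

    for epoch in range(epochs):
        alpha = lr * (1 - epoch / epochs)        # linearly decaying learning rate
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(protos - X[i], axis=1)
            w = np.argmin(d)                     # winning (nearest) prototype
            sign = 1.0 if proto_labels[w] == y[i] else -1.0
            protos[w] += sign * alpha * (X[i] - protos[w])
    return protos, proto_labels

def predict_lvq(protos, proto_labels, X):
    """Assign each sample the label of its nearest prototype."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Two well-separated clusters standing in for two speakers' MFCC vectors
X = np.vstack([np.random.default_rng(1).normal(0, 0.5, (20, 2)),
               np.random.default_rng(2).normal(5, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

protos, labels = train_lvq(X, y)
pred = predict_lvq(protos, labels, X)
```

In the paper's setting, each row of `X` would be an MFCC feature vector and each label a speaker identity; the trained prototypes act as a compact codebook per speaker, which is what makes LVQ attractive for small forensic datasets.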