Acoustic Scene Classification in Hearing aid using Deep Learning
VS Vivek, S. Vidhya, P. MadhanMohan
2020 International Conference on Communication and Signal Processing (ICCSP), July 2020
DOI: 10.1109/ICCSP48568.2020.9182160
Different acoustic environments require different hearing aid settings to deliver high-quality speech, and manually retuning those settings can be irritating. Hearing aids can therefore be provided with options and settings that are tuned automatically based on the acoustic environment. In this paper we present a simple sound classification system that can be used to switch automatically between hearing aid algorithms according to the auditory scene. Features such as MFCCs, Mel-spectrograms, chroma, spectral contrast, and Tonnetz are extracted from several hours of audio spanning five classes ("music," "noise," "speech with noise," "silence," and "clean speech") for training and testing the network. These features are then processed by a convolutional neural network. We show that the system achieves high precision with only three to five seconds of audio per scene. The algorithm is efficient, has a small memory footprint, and can be implemented in a digital hearing aid.
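The abstract does not spell out the feature pipeline, but the MFCC front end it names follows a standard recipe: a windowed STFT, a mel filterbank, a log, and a DCT. Below is a minimal NumPy-only sketch of that recipe on a 3-second test tone. The frame size, hop, filter count, and coefficient count are illustrative assumptions, not values taken from the paper, and the paper's remaining features (Mel-spectrogram, chroma, spectral contrast, Tonnetz) would be concatenated alongside these coefficients.

```python
import numpy as np

def stft_mag(x, n_fft=512, hop=256):
    """Magnitude spectrogram via a Hann-windowed STFT."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * win for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))          # (n_frames, n_fft//2 + 1)

def mel_filterbank(sr, n_fft, n_mels=40):
    """Triangular filters spaced evenly on the mel scale."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(0.0, hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        if c > l:                                       # rising slope of the triangle
            fb[i - 1, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:                                       # falling slope of the triangle
            fb[i - 1, c:r] = (r - np.arange(c, r)) / (r - c)
    return fb

def mfcc(x, sr, n_fft=512, n_mels=40, n_mfcc=13):
    """Log-mel power spectrogram followed by a DCT-II, keeping n_mfcc coefficients."""
    power = stft_mag(x, n_fft) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T   # (n_frames, n_mels)
    logmel = np.log(mel + 1e-10)
    # DCT-II basis over the mel axis decorrelates the filterbank energies
    n = np.arange(n_mels)
    basis = np.cos(np.pi / n_mels * (n[None, :] + 0.5) * np.arange(n_mfcc)[:, None])
    return logmel @ basis.T                             # (n_frames, n_mfcc)

sr = 16000
t = np.arange(3 * sr) / sr                              # a 3-second 440 Hz test tone
x = np.sin(2 * np.pi * 440.0 * t)
feats = mfcc(x, sr)
print(feats.shape)                                      # one 13-dim vector per frame
```

A per-clip feature for the classifier could then be formed by stacking these frame vectors into a 2-D "image" for the CNN, or by summarizing them with per-coefficient means and variances over the 3 to 5 second window.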