Emergency Signal Classification for the Hearing Impaired using Multi-channel Convolutional Neural Network Architecture

Swarup Padhy, Juhi Tiwari, S. Rathore, Neetesh Kumar

2019 IEEE Conference on Information and Communication Technology, December 2019. DOI: 10.1109/CICT48419.2019.9066252
Hearing-impaired people face many challenges, particularly during emergencies, which can make them dependent on others. Emergency situations are mostly perceived through auditory cues, which raises the need for systems that sense emergency sounds and communicate them effectively to deaf users. The present study differentiates emergency audio signals from non-emergency sounds using a Multi-Channel Convolutional Neural Network (CNN). Various data augmentation techniques were explored, with particular attention to Mixup, in order to improve the model's performance. The experiments showed a cross-validation accuracy of 88.28% and a testing accuracy of 88.09%. To bring the model into the daily lives of the hearing impaired, an Android application was developed that vibrates the phone whenever an emergency sound is detected. The app can be paired with an Android Wear device such as a smartwatch, which stays with the wearer at all times, effectively alerting them to emergency situations.
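The Mixup augmentation highlighted in the abstract trains the model on convex combinations of random example pairs and their labels rather than on raw examples alone. A minimal NumPy sketch of the general technique (Zhang et al., 2017) is shown below; the function name, batch shapes, and the `alpha` value are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mixup: blend each example (and its one-hot label) with a randomly
    chosen partner using a Beta(alpha, alpha)-distributed coefficient.
    `alpha=0.2` is a common choice, assumed here for illustration."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)          # single mixing coefficient for the batch
    idx = rng.permutation(len(x))         # random partner index for each example
    x_mix = lam * x + (1 - lam) * x[idx]  # blended inputs
    y_mix = lam * y + (1 - lam) * y[idx]  # blended (soft) labels
    return x_mix, y_mix

# Example: mix a tiny batch of two examples with 2-class one-hot labels.
x = np.array([[0.0], [1.0]])
y = np.array([[1.0, 0.0], [0.0, 1.0]])
x_mix, y_mix = mixup_batch(x, y, alpha=0.2, rng=np.random.default_rng(0))
```

Because the mixed labels are convex combinations of one-hot vectors, each row of `y_mix` still sums to 1, so the usual cross-entropy loss applies unchanged.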