Towards Automatic Recognition of Sounds Observed in Daily Living Activity
A. Shaukat, Ammar Younis, M. Akram, M. Mohsin, Zartasha Mustansar
2019 IEEE 18th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC), July 2019
DOI: 10.1109/ICCICC46617.2019.9146067
Abstract
An automated system is proposed to recognize different sounds from the daily living activities of humans. Such a system can help individuals and caretakers detect abnormal sound events and take immediate action. We propose a sound detection model that recognizes the sounds of an individual's daily activities and test it on three benchmark datasets: the Real World Computing Partnership Sound Database in Real Acoustical Environments (RWCP-DB), UrbanSound8K, and ESC-10. Linear spectrograms, MFCCs, and gammatone spectrograms serve as baseline features for classification with convolutional neural networks (CNNs). We propose two models based on CNN and CNN-SVM architectures, and also train AlexNet and GoogLeNet using transfer learning. Our system performs well across different combinations of features, showing improved classification accuracy compared with other methods reported in the literature.
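The abstract describes a pipeline in which time-frequency features (linear spectrograms, MFCCs, gammatone spectrograms) are fed to CNN-based classifiers. The sketch below is a minimal illustration of that kind of pipeline, assuming librosa for MFCC extraction and PyTorch for the CNN; the layer sizes, input dimensions, and class count are assumptions for illustration and do not reproduce the paper's actual architecture.

```python
# Illustrative MFCC + small-CNN classifier sketch.
# librosa and PyTorch are assumed; all layer sizes are placeholders,
# not the architecture reported in the paper.
import librosa
import numpy as np
import torch
import torch.nn as nn

def mfcc_features(path, sr=16000, n_mfcc=40, frames=128):
    """Load a clip and return a fixed-size (1, n_mfcc, frames) MFCC 'image'."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape (n_mfcc, T)
    # Pad or crop along time so every clip yields the same CNN input size.
    if mfcc.shape[1] < frames:
        mfcc = np.pad(mfcc, ((0, 0), (0, frames - mfcc.shape[1])))
    else:
        mfcc = mfcc[:, :frames]
    return torch.from_numpy(mfcc).float().unsqueeze(0)       # add channel dim

class SmallCNN(nn.Module):
    """Toy CNN over the 2-D feature map; n_classes=10 mirrors ESC-10 / UrbanSound8K."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 10 * 32, n_classes)  # 40x128 -> 10x32 after pooling

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

if __name__ == "__main__":
    x = torch.randn(4, 1, 40, 128)   # dummy batch standing in for real MFCC maps
    logits = SmallCNN()(x)
    print(logits.shape)              # torch.Size([4, 10])
```

For the transfer-learning variant mentioned in the abstract, the same feature maps would instead be resized to the input expected by a pretrained network (e.g., torchvision's AlexNet or GoogLeNet) and the final fully connected layer replaced to match the number of sound classes.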