{"title":"Smart Carnatic Music Note Identification (CMNI) System using Probabilistic Neural Network","authors":"S. Ramya","doi":"10.1109/ICSSIT46314.2019.8987960","DOIUrl":null,"url":null,"abstract":"Music is defined as an art which arranges the sounds to provide the inner feeling of happiness. Carnatic music is based on a Raga (Tune), Bhava (Emotion), Thala (Rhythm) and also characterized by Saptha swara (musical note), Sthayi and reference note (Shruthi). In this proposed work, an attempt is made to identify the swaras in Madhya Sthayi and constant Shruthi. The recorded music samples are analyzed in the frequency domain by using digital signal processing. The features are extracted and input to the probabilistic neural-network for the note identification. The performance of the system is verified for 25 samples for each note. The system success rate is above 90%.","PeriodicalId":330309,"journal":{"name":"2019 International Conference on Smart Systems and Inventive Technology (ICSSIT)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Smart Systems and Inventive Technology (ICSSIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSSIT46314.2019.8987960","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Music is an art that arranges sounds to evoke an inner feeling of happiness. Carnatic music is based on Raga (tune), Bhava (emotion), and Thala (rhythm), and is further characterized by the Saptha swaras (the seven musical notes), Sthayi (octave register), and a reference note (Shruthi). In this work, an attempt is made to identify the swaras in the Madhya Sthayi (middle octave) at a constant Shruthi. The recorded music samples are analyzed in the frequency domain using digital signal processing, and the extracted features are fed to a probabilistic neural network for note identification. The performance of the system is verified with 25 samples per note, and the success rate is above 90%.
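The abstract describes a pipeline of frequency-domain analysis, feature extraction, and probabilistic neural network (PNN) classification. The sketch below is a minimal, hypothetical illustration of such a pipeline, not the paper's implementation: it assumes a single FFT-peak pitch feature, an arbitrary kernel width sigma, and illustrative swara frequencies relative to an assumed Shruthi of C4.

```python
# Minimal sketch, NOT the paper's implementation: estimate each sample's
# fundamental frequency from an FFT peak, then classify the swara with a
# probabilistic neural network (a Gaussian-kernel Parzen classifier).
# Sampling rate, sigma, Shruthi, and swara frequencies are assumed values.
import numpy as np

FS = 44100  # assumed sampling rate in Hz


def fundamental_frequency(signal, fs=FS):
    """Return the frequency (Hz) of the strongest FFT magnitude peak."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]


class PNN:
    """Probabilistic neural network: one Gaussian kernel per training sample."""

    def __init__(self, sigma=0.02):
        self.sigma = sigma   # kernel smoothing parameter (assumed value)
        self.classes = {}    # swara label -> array of training feature vectors

    def fit(self, features, labels):
        features = np.asarray(features, dtype=float)
        for label in set(labels):
            rows = [i for i, lab in enumerate(labels) if lab == label]
            self.classes[label] = features[rows]

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        # Average kernel response per class; pick the most probable swara.
        scores = {label: np.mean(np.exp(-np.sum((pts - x) ** 2, axis=1)
                                        / (2.0 * self.sigma ** 2)))
                  for label, pts in self.classes.items()}
        return max(scores, key=scores.get)


if __name__ == "__main__":
    # Feature: log2 of the detected pitch relative to the Shruthi (Sa), so the
    # classifier depends only on the interval, not on the absolute tuning.
    shruthi = 261.63  # assumed constant Shruthi (C4)
    train_pitches = [261.63, 294.33, 327.03, 348.84, 392.45, 436.05, 490.56]
    train_labels = ["Sa", "Ri", "Ga", "Ma", "Pa", "Dha", "Ni"]
    feats = [[np.log2(p / shruthi)] for p in train_pitches]

    pnn = PNN(sigma=0.02)
    pnn.fit(feats, train_labels)

    test_pitch = 293.0  # e.g. fundamental_frequency() of a recorded sample
    print(pnn.predict([np.log2(test_pitch / shruthi)]))  # -> "Ri"
```

In a PNN, classification reduces to comparing summed Gaussian kernel responses over each class's stored training samples, which is why the paper can train on only 25 samples per note; the kernel width sigma used above is a placeholder, not a value reported in the paper.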