{"title":"使用深度神经网络和事件驱动听觉传感器的音频分类系统","authors":"Enea Ceolini, I. Kiselev, Shih-Chii Liu","doi":"10.1109/SENSORS43011.2019.8956592","DOIUrl":null,"url":null,"abstract":"We describe ongoing research in developing audio classification systems that use a spiking silicon cochlea as the front end. Event-driven features extracted from the spikes are fed to deep networks for the intended task. We describe a classification task on naturalistic audio sounds using a low-power silicon cochlea that outputs asynchronous events through a send-on-delta encoding of its sharply-tuned cochlea channels. Because of the event-driven nature of the processing, silences in these naturalistic sounds lead to corresponding absence of cochlea spikes and savings in computes. Results show 48% savings in computes with a small loss in accuracy using cochlea events.","PeriodicalId":6710,"journal":{"name":"2019 IEEE SENSORS","volume":"CE-30 1","pages":"1-4"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Audio classification systems using deep neural networks and an event-driven auditory sensor\",\"authors\":\"Enea Ceolini, I. Kiselev, Shih-Chii Liu\",\"doi\":\"10.1109/SENSORS43011.2019.8956592\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We describe ongoing research in developing audio classification systems that use a spiking silicon cochlea as the front end. Event-driven features extracted from the spikes are fed to deep networks for the intended task. We describe a classification task on naturalistic audio sounds using a low-power silicon cochlea that outputs asynchronous events through a send-on-delta encoding of its sharply-tuned cochlea channels. Because of the event-driven nature of the processing, silences in these naturalistic sounds lead to corresponding absence of cochlea spikes and savings in computes. Results show 48% savings in computes with a small loss in accuracy using cochlea events.\",\"PeriodicalId\":6710,\"journal\":{\"name\":\"2019 IEEE SENSORS\",\"volume\":\"CE-30 1\",\"pages\":\"1-4\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE SENSORS\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SENSORS43011.2019.8956592\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE SENSORS","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SENSORS43011.2019.8956592","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Audio classification systems using deep neural networks and an event-driven auditory sensor
We describe ongoing research in developing audio classification systems that use a spiking silicon cochlea as the front end. Event-driven features extracted from the spikes are fed to deep networks for the intended task. We describe a classification task on naturalistic audio sounds using a low-power silicon cochlea that outputs asynchronous events through a send-on-delta encoding of its sharply tuned cochlea channels. Because of the event-driven nature of the processing, silences in these naturalistic sounds lead to a corresponding absence of cochlea spikes and therefore to savings in compute. Results show a 48% savings in compute with a small loss in accuracy when using cochlea events.
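The abstract refers to a send-on-delta encoding, in which a channel emits an asynchronous event only when its output has changed by more than a threshold since the last emitted event, so silent input produces no events and no downstream compute. The sketch below is a minimal illustration of that general principle, not the sensor's actual circuit; the function name, the amplitude-threshold formulation, and the example signal are all assumptions for illustration.

```python
# Illustrative sketch of send-on-delta encoding (assumed amplitude-threshold
# form; not the silicon cochlea's actual implementation).
import numpy as np

def send_on_delta(signal, timestamps, delta):
    """Return (event_times, event_polarities) for one hypothetical channel.

    signal     : 1-D array of channel amplitudes.
    timestamps : 1-D array of sample times, same length as `signal`.
    delta      : change threshold that triggers an event.
    """
    events_t, events_p = [], []
    last = signal[0]  # value at the last emitted event
    for t, x in zip(timestamps[1:], signal[1:]):
        change = x - last
        if abs(change) >= delta:
            events_t.append(t)
            events_p.append(1 if change > 0 else -1)  # ON / OFF polarity
            last = x
    return np.asarray(events_t), np.asarray(events_p)

# Example: a tone burst followed by silence produces events only while the
# signal is changing; the silent half contributes essentially nothing.
t = np.linspace(0.0, 1.0, 16000)
x = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t), 0.0)
ev_t, ev_p = send_on_delta(x, t, delta=0.3)
print(f"total events: {len(ev_t)}, events during silence (t >= 0.5): {np.sum(ev_t >= 0.5)}")
```

This sparsity is what the reported compute savings rely on: downstream processing is driven only by the events that arrive, so stretches of silence cost nothing beyond the (at most one) settling event when the channel output stops changing.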