A Wearable Platform for Research in Augmented Hearing

Louis Pisha, Sean Hamilton, Dhiman Sengupta, Ching-Hua Lee, Krishna Chaithanya Vastare, Tamara Zubatiy, Sergio Luna, Cagri Yalcin, Alex Grant, Rajesh Gupta, Ganz Chockalingam, Bhaskar D Rao, Harinath Garudadri

Conference Record of the Asilomar Conference on Signals, Systems & Computers, 2018, pp. 223-227. DOI: 10.1109/ACSSC.2018.8645557
We have previously reported a real-time, open-source speech-processing platform (OSP) for hearing aid (HA) research. In this contribution, we describe a wearable version of this platform to facilitate audiological studies in the lab and in the field. The system is based on smartphone chipsets to leverage their power efficiency in terms of FLOPS/watt and their economies of scale. We present the system architecture and discuss salient design elements in support of HA research. The ear-level assemblies support up to four microphones on each ear, with 96 kHz, 24-bit codecs. The wearable unit runs OSP Release 2018c on top of 64-bit Debian Linux for binaural HA processing with an overall latency of 5.6 ms. The wearable unit also hosts an embedded web server (EWS) to monitor and control the HA state in real time. We describe three example web apps and the typical audiological studies they enable. Finally, we describe a baseline speech enhancement module included with Release 2018c and outline extensions to these algorithms as future work.
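As a rough illustration of how the embedded web server mentioned above could be scripted during a study, the Python sketch below polls and updates the HA state over HTTP. The host name ospboard.local, the /api/state endpoint, and the JSON field names are assumptions made for illustration only; the actual OSP Release 2018c interface may differ, so consult its documentation before use.

```python
"""Hypothetical sketch of monitoring/controlling HA state via the wearable
unit's embedded web server (EWS). Endpoint paths, host name, and JSON schema
are assumptions for illustration, not the documented OSP API."""
import requests

EWS_BASE = "http://ospboard.local"  # assumed address of the wearable unit


def get_ha_state():
    """Fetch the current HA state (assumed /api/state endpoint)."""
    resp = requests.get(f"{EWS_BASE}/api/state", timeout=2.0)
    resp.raise_for_status()
    return resp.json()


def set_band_gains(gains_db, ear="left"):
    """Push new per-band gains in dB (assumed endpoint and payload schema)."""
    payload = {"ear": ear, "gains_db": gains_db}
    resp = requests.post(f"{EWS_BASE}/api/state", json=payload, timeout=2.0)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    # Read back the current state, then apply a flat 10 dB gain across six
    # hypothetical bands on the left ear.
    print(get_ha_state())
    set_band_gains([10.0] * 6, ear="left")
```

A script of this kind could run on a clinician's laptop or tablet alongside the browser-based web apps, logging state changes while a subject wears the device.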