Densely Connected Dilated Convolutions with Time-Frequency Attention for Speech Enhancement

Manaswini Burra, Pavan Kumar Reddy Yerva, Balaji Eemani, Abhinash Sunkara
{"title":"Densely Connected Dilated Convolutions with Time-Frequency Attention for Speech Enhancement","authors":"Manaswini Burra, Pavan Kumar Reddy Yerva, Balaji Eemani, Abhinash Sunkara","doi":"10.1109/ICAAIC56838.2023.10140871","DOIUrl":null,"url":null,"abstract":"This research study has proposed a Dilated Dense Time Frequency Attention Autoencoder (DDTFAAEC) model to perform real-time speech enhancement. The proposed model consists of a fully convolutional neural networks with time frequency attention (TFA). TFA blocks have been followed by the convolutional and dense layers in the decoder and encoder. By combining feature reuse, deeper networks, and maximal context aggregation, dense blocks and attention modules are used to assist in the process of feature extraction. TFA mechanism is designed to learn important information with respect to time, channel and frequency in Convolutional Neural Networks (CNN). At different resolutions, the context aggregation is achieved by using the dilated convolutions. To avoid the information flow from future frames, casual convolutions are used, therefore the network will be made applicable for the real-time applications. This research study utilizes the sub-pixel convolutional layers in the decoder for the purpose of upsampling. 
In terms of quality scores and objective intelligibility, the experimental result outperforms the already used methods.","PeriodicalId":267906,"journal":{"name":"2023 2nd International Conference on Applied Artificial Intelligence and Computing (ICAAIC)","volume":"49 6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 2nd International Conference on Applied Artificial Intelligence and Computing (ICAAIC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAAIC56838.2023.10140871","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

This study proposes a Dilated Dense Time-Frequency Attention Autoencoder (DDTFAAEC) model for real-time speech enhancement. The model is a fully convolutional neural network with time-frequency attention (TFA): TFA blocks follow the convolutional and dense layers in both the encoder and the decoder. Dense blocks and attention modules assist feature extraction by combining feature reuse, deeper networks, and maximal context aggregation. The TFA mechanism is designed to learn important information along the time, channel, and frequency dimensions of a convolutional neural network (CNN). Context aggregation at different resolutions is achieved with dilated convolutions. Causal convolutions prevent information from flowing in from future frames, which makes the network applicable to real-time use. The decoder uses sub-pixel convolutional layers for upsampling. In terms of quality scores and objective intelligibility, the experimental results outperform existing methods.
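The paper does not reproduce the exact TFA equations in the abstract, but the stated idea, learning attention weights along the time, channel, and frequency axes of a CNN feature map, can be sketched with simple pooling-and-gating in NumPy. All function names here are illustrative, not from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tfa_gate(feat):
    """Toy time-frequency attention gate (illustrative sketch, not the
    paper's exact design).

    feat : (C, F, T) feature map -- channels x frequency bins x time frames.
    For each axis, a weight vector is derived by average-pooling over the
    other two axes, squashed to (0, 1) with a sigmoid, and multiplied back
    onto the features, so informative channels/bands/frames are emphasized.
    """
    c_w = sigmoid(feat.mean(axis=(1, 2)))  # (C,) channel attention
    f_w = sigmoid(feat.mean(axis=(0, 2)))  # (F,) frequency attention
    t_w = sigmoid(feat.mean(axis=(0, 1)))  # (T,) time attention
    return feat * c_w[:, None, None] * f_w[None, :, None] * t_w[None, None, :]
```

In a trained network the pooled statistics would pass through small learned layers before the sigmoid; the pooling-gate-rescale pattern shown here is the common skeleton of such axis-wise attention modules.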
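The two convolution properties the abstract relies on, dilation for a wider receptive field and causality for real-time operation, can be shown in a minimal 1-D NumPy sketch (hypothetical helper, assuming a single sequence such as one frequency bin over time):

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """1-D causal convolution with dilation (illustrative sketch).

    x : (T,) input sequence
    w : (K,) filter taps
    dilation : spacing between taps; receptive field = (K-1)*dilation + 1

    Causality: the output at frame t depends only on frames <= t. This is
    enforced by left-padding with (K-1)*dilation zeros, so no future
    frames are ever read.
    """
    K = len(w)
    pad = (K - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # pad the past only
    y = np.zeros(len(x))
    for t in range(len(x)):
        # taps look back at frames t, t-d, t-2d, ...
        y[t] = sum(w[k] * xp[pad + t - k * dilation] for k in range(K))
    return y
```

Stacking such layers with growing dilation factors (1, 2, 4, ...) aggregates context at different resolutions while keeping each output frame computable as soon as its input frame arrives.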
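Sub-pixel convolution upsamples by producing r times as many channels with an ordinary convolution and then interleaving those channel groups along the upsampled axis (the "pixel shuffle"). The rearrangement step alone can be sketched in 1-D NumPy (hypothetical helper; the paper applies the idea inside its decoder):

```python
import numpy as np

def subpixel_shuffle_1d(x, r):
    """Sub-pixel (pixel-shuffle) rearrangement along the last axis.

    x : (C*r, T) feature map, as produced by a convolution that emits
        r output channels per target channel
    r : upscale factor
    Returns (C, T*r): each group of r channels is interleaved into a
    sequence r times longer -- a convolution-based alternative to
    transposed convolution for decoder upsampling.
    """
    Cr, T = x.shape
    C = Cr // r
    # (C*r, T) -> (C, r, T) -> (C, T, r) -> (C, T*r)
    return x.reshape(C, r, T).transpose(0, 2, 1).reshape(C, T * r)
```

Because the upsampling is a pure reindexing, all learned parameters stay in the preceding convolution, which avoids the checkerboard artifacts transposed convolutions can produce.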