IEEE/ACM Transactions on Audio, Speech, and Language Processing: Latest Articles

Graph-Based Cross-Granularity Message Passing on Knowledge-Intensive Text
IF 4.1, CAS Q2, Computer Science
IEEE/ACM Transactions on Audio, Speech, and Language Processing. Pub Date: 2024-10-02. DOI: 10.1109/TASLP.2024.3473308
Chenwei Yan; Xiangling Fu; Xinxin You; Ji Wu; Xien Liu
Abstract: In knowledge-intensive fields such as medicine, text often contains numerous professional terms, specific text fragments, and multidimensional information. However, most existing text representation methods ignore this specialized knowledge and instead adopt methods similar to those used in the general domain. In this paper, we focus on developing a learning module that enhances the representation of knowledge-intensive text by leveraging a graph-based cross-granularity message passing mechanism. To this end, we propose a novel learning framework, the Multi-Granularity Graph Neural Network (MG-GNN), to integrate fine-grained and coarse-grained knowledge at the character, word, and phrase levels. MG-GNN performs learning in two stages: 1) intra-granularity learning, in which semantic knowledge is extracted from the character, word, and phrase granularity graphs individually, and 2) inter-granularity learning, which fuses knowledge across the different granularity graphs to achieve comprehensive message integration. To enhance this fusion, we propose a context-based gating mechanism to guide cross-graph propagation learning. Furthermore, we apply MG-GNN to two important medical applications. Experimental results demonstrate that the proposed MG-GNN model significantly enhances performance on both diagnosis prediction and medical named entity recognition tasks.
Vol. 32, pp. 4409-4419.
Citations: 0
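The abstract describes the two-stage, gate-fused design only at a high level. As a rough illustration (not the authors' implementation), the PyTorch sketch below shows one way character-, word-, and phrase-level graphs could be combined through a context-based gate; the GCN-style layers, the dimensions, and the mean-pooling alignment across granularities are all assumptions.

```python
# Hypothetical sketch of cross-granularity gated message passing (not the paper's code).
import torch
import torch.nn as nn

class GraphLayer(nn.Module):
    """Plain GCN-style layer: aggregate neighbours with a normalized adjacency."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):          # x: [N, dim], adj: [N, N] (row-normalized)
        return torch.relu(self.lin(adj @ x))

class GatedCrossGranularityFusion(nn.Module):
    """Fuse word-level nodes with pooled character/phrase context via a learned gate."""
    def __init__(self, dim):
        super().__init__()
        self.char_gnn = GraphLayer(dim)
        self.word_gnn = GraphLayer(dim)
        self.phrase_gnn = GraphLayer(dim)
        self.gate = nn.Sequential(nn.Linear(3 * dim, dim), nn.Sigmoid())

    def forward(self, char_x, char_adj, word_x, word_adj, phrase_x, phrase_adj):
        # Stage 1: message passing inside each granularity graph.
        c = self.char_gnn(char_x, char_adj)
        w = self.word_gnn(word_x, word_adj)
        p = self.phrase_gnn(phrase_x, phrase_adj)
        # Stage 2: a context-based gate decides how much cross-granularity
        # information flows into each word node (mean pooling stands in for a
        # real character/phrase-to-word alignment).
        c_ctx = c.mean(0, keepdim=True).expand_as(w)
        p_ctx = p.mean(0, keepdim=True).expand_as(w)
        g = self.gate(torch.cat([w, c_ctx, p_ctx], dim=-1))
        return g * w + (1 - g) * 0.5 * (c_ctx + p_ctx)

# Toy usage: 6 characters, 4 words, 2 phrases, 16-dim node features.
dim = 16
fuse = GatedCrossGranularityFusion(dim)
out = fuse(torch.randn(6, dim), torch.eye(6),
           torch.randn(4, dim), torch.eye(4),
           torch.randn(2, dim), torch.eye(2))
print(out.shape)  # torch.Size([4, 16])
```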
Cross-Utterance Conditioned VAE for Speech Generation
IF 4.1, CAS Q2, Computer Science
IEEE/ACM Transactions on Audio, Speech, and Language Processing. Pub Date: 2024-09-30. DOI: 10.1109/TASLP.2024.3453598
Yang Li; Cheng Yu; Guangzhi Sun; Weiqin Zu; Zheng Tian; Ying Wen; Wei Pan; Chao Zhang; Jun Wang; Yang Yang; Fanglei Sun
Abstract: Speech synthesis systems powered by neural networks hold promise for multimedia production but frequently face issues with producing expressive speech and seamless editing. In response, we present the Cross-Utterance Conditioned Variational Autoencoder speech synthesis (CUC-VAE S2) framework to enhance prosody and ensure natural speech generation. The framework leverages the powerful representational capabilities of pre-trained language models and the re-expression abilities of variational autoencoders (VAEs). Its core component is the cross-utterance CVAE, which extracts acoustic, speaker, and textual features from surrounding sentences to generate context-sensitive prosodic features, more accurately emulating human prosody generation. We further propose two practical algorithms tailored for distinct speech synthesis applications: CUC-VAE TTS for text-to-speech and CUC-VAE SE for speech editing. CUC-VAE TTS is a direct application of the framework, designed to generate audio with contextual prosody derived from surrounding texts. CUC-VAE SE, on the other hand, leverages real mel spectrogram sampling conditioned on contextual information, producing audio that closely mirrors real sound and thereby enabling flexible text-based speech editing such as deletion, insertion, and replacement. Experimental results on the LibriTTS dataset demonstrate that the proposed models significantly enhance speech synthesis and editing, producing more natural and expressive speech.
Vol. 32, pp. 4263-4276.
Citations: 0
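To make the conditioning idea concrete, here is a minimal conditional-VAE skeleton in which both the posterior and the prior over prosody latents depend on a cross-utterance context vector; the linear encoders, the feature dimensions, and the context source (e.g., embeddings of surrounding sentences) are assumptions for illustration and do not reproduce CUC-VAE S2.

```python
# Illustrative context-conditioned VAE skeleton (a sketch under assumed shapes).
import torch
import torch.nn as nn

class CrossUtteranceCVAE(nn.Module):
    def __init__(self, acoustic_dim=80, context_dim=256, latent_dim=16):
        super().__init__()
        # Posterior q(z | acoustic frame, cross-utterance context)
        self.post = nn.Linear(acoustic_dim + context_dim, 2 * latent_dim)
        # Prior p(z | cross-utterance context): utterance-dependent, not N(0, I)
        self.prior = nn.Linear(context_dim, 2 * latent_dim)
        self.dec = nn.Linear(latent_dim + context_dim, acoustic_dim)

    def forward(self, acoustic, context):
        mu_q, logvar_q = self.post(torch.cat([acoustic, context], -1)).chunk(2, -1)
        mu_p, logvar_p = self.prior(context).chunk(2, -1)
        z = mu_q + torch.randn_like(mu_q) * torch.exp(0.5 * logvar_q)   # reparameterize
        recon = self.dec(torch.cat([z, context], -1))
        # KL between the two Gaussians ties the posterior to the context prior.
        kl = 0.5 * (logvar_p - logvar_q
                    + (torch.exp(logvar_q) + (mu_q - mu_p) ** 2) / torch.exp(logvar_p) - 1)
        return recon, kl.sum(-1).mean()

model = CrossUtteranceCVAE()
frames = torch.randn(8, 80)      # e.g. mel frames of the current utterance
context = torch.randn(8, 256)    # e.g. pre-trained LM features of surrounding utterances
recon, kl = model(frames, context)
print(recon.shape, kl.item())
```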
Cross Domain Optimization for Speech Enhancement: Parallel or Cascade?
IF 4.1, CAS Q2, Computer Science
IEEE/ACM Transactions on Audio, Speech, and Language Processing. Pub Date: 2024-09-26. DOI: 10.1109/TASLP.2024.3468026
Liang Wan; Hongqing Liu; Liming Shi; Yi Zhou; Lu Gan
Abstract: This paper introduces five novel deep-learning architectures for speech enhancement. Existing methods typically operate in the time domain, in a time-frequency representation, or in a hybrid of the two. Recognizing the unique contributions of each domain to feature extraction and model design, this study investigates the integration of waveform and complex-spectrogram models through cross-domain fusion to enhance speech feature learning and noise reduction, thereby improving speech quality. We examine both cascading and parallel configurations of waveform and complex-spectrogram models to assess their effectiveness in speech enhancement. Additionally, we employ an orthogonal-projection-based error decomposition technique and manage the inputs of the individual sub-models to analyze the factors affecting speech quality. The network is trained by optimizing three specific loss functions applied across all sub-models. Our experiments on the DNS Challenge (ICASSP 2021) dataset reveal that the proposed models surpass existing benchmarks in speech enhancement, offering superior speech quality and intelligibility. These results highlight the efficacy of our cross-domain fusion strategy.
Vol. 32, pp. 4328-4341.
Citations: 0
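A toy sketch of the two fusion topologies the paper compares: a waveform model and a complex-spectrogram masking model combined in cascade (one refines the other's output) or in parallel (both see the noisy input). The tiny convolutional stand-ins and the plain averaging fusion are assumptions, not the paper's architectures.

```python
# Rough sketch of cascade vs. parallel fusion across time and T-F domains.
import torch
import torch.nn as nn

N_FFT, HOP = 512, 128

class WaveModel(nn.Module):
    """Time-domain stand-in: a single conv layer over the raw waveform."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv1d(1, 1, kernel_size=9, padding=4)
    def forward(self, wav):                           # wav: [B, T]
        return self.net(wav.unsqueeze(1)).squeeze(1)

class SpecModel(nn.Module):
    """T-F-domain stand-in: predicts a complex ratio mask on the STFT."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(2, 2, kernel_size=3, padding=1)
    def forward(self, wav):
        win = torch.hann_window(N_FFT)
        spec = torch.stft(wav, N_FFT, HOP, window=win, return_complex=True)
        mask = self.net(torch.stack([spec.real, spec.imag], dim=1))
        est = torch.complex(mask[:, 0], mask[:, 1]) * spec
        return torch.istft(est, N_FFT, HOP, window=win, length=wav.shape[-1])

wave_model, spec_model = WaveModel(), SpecModel()
noisy = torch.randn(2, 16000)

# Cascade: the spectrogram model refines the waveform model's output.
cascade_out = spec_model(wave_model(noisy))

# Parallel: both models see the noisy input; their estimates are averaged
# (a learned fusion layer would normally replace the plain mean).
parallel_out = 0.5 * (wave_model(noisy) + spec_model(noisy))
print(cascade_out.shape, parallel_out.shape)
```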
Sound Field Estimation Based on Physics-Constrained Kernel Interpolation Adapted to Environment
IF 4.1, CAS Q2, Computer Science
IEEE/ACM Transactions on Audio, Speech, and Language Processing. Pub Date: 2024-09-25. DOI: 10.1109/TASLP.2024.3467951
Juliano G. C. Ribeiro; Shoichi Koyama; Ryosuke Horiuchi; Hiroshi Saruwatari
Abstract: A sound field estimation method based on kernel interpolation with an adaptive kernel function is proposed. Kernel-interpolation-based sound field estimation methods enable physics-constrained interpolation from the pressure measurements of distributed microphones with a linear estimator, constraining the interpolation functions to satisfy the Helmholtz equation. However, a fixed kernel function cannot adapt to the acoustic environment in which the measurement is performed, which limits applicability. To make the kernel function adaptive, we represent it as a sum of directed and residual trainable kernel functions. The directed kernel is defined by a weight function composed of a superposition of exponential functions to capture highly directional components. The weight function of the residual kernel is represented by neural networks to capture unpredictable spatial patterns of the residual components. Experimental results using simulated and real data indicate that the proposed method outperforms current kernel-interpolation-based methods and a method based on physics-informed neural networks.
Vol. 32, pp. 4369-4383. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10693558
Citations: 0
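For intuition, the following NumPy sketch performs plain kernel-interpolation sound field estimation with a fixed Helmholtz-constrained kernel kappa(r1, r2) = j0(k * ||r1 - r2||); the adaptive directed/residual kernel proposed in the paper is not reproduced, and the geometry, frequency, and regularization constant are assumed.

```python
# Minimal kernel-interpolation sound field estimation with a fixed kernel.
import numpy as np

c = 343.0                       # speed of sound [m/s]
f = 500.0                       # frequency [Hz]
k = 2 * np.pi * f / c           # wavenumber

def helmholtz_kernel(r1, r2):
    """kappa(r1, r2) = j0(k * |r1 - r2|): interpolants built from this kernel
    satisfy the homogeneous Helmholtz equation (np.sinc(x) = sin(pi x)/(pi x))."""
    d = np.linalg.norm(r1[:, None, :] - r2[None, :, :], axis=-1)
    return np.sinc(k * d / np.pi)

rng = np.random.default_rng(0)
mics = rng.uniform(-0.5, 0.5, size=(16, 3))                          # mic positions [m]
pressures = rng.standard_normal(16) + 1j * rng.standard_normal(16)   # measured pressures

# Regularized kernel ridge regression: alpha = (K + lam * I)^-1 p
K = helmholtz_kernel(mics, mics)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(mics)), pressures)

# Estimate the pressure at an arbitrary target point inside the region.
target = np.array([[0.1, 0.0, 0.2]])
estimate = helmholtz_kernel(target, mics) @ alpha
print(estimate)
```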
An Investigation of Time-Frequency Representation Discriminators for High-Fidelity Vocoders
IF 4.1, CAS Q2, Computer Science
IEEE/ACM Transactions on Audio, Speech, and Language Processing. Pub Date: 2024-09-25. DOI: 10.1109/TASLP.2024.3468005
Yicheng Gu; Xueyao Zhang; Liumeng Xue; Haizhou Li; Zhizheng Wu
Abstract: Generative Adversarial Network (GAN)-based vocoders are superior in both inference speed and synthesis quality when reconstructing an audible waveform from an acoustic representation. This study focuses on improving the discriminator for GAN-based vocoders. Most existing Time-Frequency Representation (TFR)-based discriminators are rooted in the Short-Time Fourier Transform (STFT), which has a constant time-frequency resolution, linearly scaled center frequencies, and a fixed decomposition basis, making it ill-suited to signals such as singing voices that require dynamic attention across frequency bands and time intervals. Motivated by this, we propose a Multi-Scale Sub-Band Constant-Q Transform (MS-SB-CQT) discriminator and a Multi-Scale Temporal-Compressed Continuous Wavelet Transform (MS-TC-CWT) discriminator. Both CQT and CWT offer a dynamic time-frequency resolution across frequency bands; CQT better models pitch information, while CWT better models short-time transients. Experiments conducted on both speech and singing voices confirm the effectiveness of the proposed discriminators. Moreover, the STFT-, CQT-, and CWT-based discriminators can be used jointly for better performance. The proposed discriminators boost the synthesis quality of various state-of-the-art GAN-based vocoders, including HiFi-GAN, BigVGAN, and APNet.
Vol. 32, pp. 4569-4579.
Citations: 0
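As a simplified illustration of a TFR-based discriminator, the sketch below feeds the log-magnitude CQT of a waveform (computed with librosa) into a small convolutional scorer. The multi-scale sub-band and temporal-compression mechanisms of MS-SB-CQT/MS-TC-CWT are not modeled, and all layer sizes and CQT settings are assumptions.

```python
# Toy CQT-based discriminator: CQT front-end plus a tiny conv scorer.
import numpy as np
import librosa
import torch
import torch.nn as nn

class CQTDiscriminator(nn.Module):
    def __init__(self, sr=22050, n_bins=84, bins_per_octave=12, hop=256):
        super().__init__()
        self.sr, self.n_bins, self.bpo, self.hop = sr, n_bins, bins_per_octave, hop
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 3, stride=2, padding=1),   # patch-wise real/fake scores
        )

    def forward(self, wav: np.ndarray) -> torch.Tensor:
        # The CQT has log-spaced center frequencies, i.e. finer frequency
        # resolution at low pitch, unlike the constant-resolution STFT.
        cqt = librosa.cqt(wav, sr=self.sr, hop_length=self.hop,
                          n_bins=self.n_bins, bins_per_octave=self.bpo)
        mag = torch.from_numpy(np.abs(cqt)).float()[None, None]   # [1, 1, F, T]
        return self.net(torch.log1p(mag))

disc = CQTDiscriminator()
scores = disc(np.random.randn(22050).astype(np.float32))   # 1 s of audio
print(scores.shape)
```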
Three-Dimensional Room Transfer Function Parameterization Based on Multiple Concentric Planar Circular Arrays
IF 4.1, CAS Q2, Computer Science
IEEE/ACM Transactions on Audio, Speech, and Language Processing. Pub Date: 2024-09-25. DOI: 10.1109/TASLP.2024.3468025
Lu Li; Maoshen Jia; Changchun Bao
Abstract: This study proposes a three-dimensional room transfer function (RTF) parameterization method based on multiple concentric planar circular arrays, which is robust to variations in the positions of both the receiver and the source. According to the harmonic solution of the wave equation, the RTFs between two spherical regions (sound source and receiver) in a room can be expressed as a weighted sum of spherical harmonics, whose weight coefficients serve as the RTF parameters. These parameters can be estimated by placing multiple concentric planar circular arrays composed of monopole-source pairs (MSPs) and multiple concentric planar circular arrays composed of omnidirectional-microphone pairs (OMPs) in the source and receiver regions, respectively. We use the MSP arrays to generate the required outgoing sound fields originating from the source region, and we derive a method that uses the OMP arrays to estimate the RTF parameters concealed within the captured sound field. These parameters can then be employed to reconstruct the RTF from any point in the source region to any point in the receiver region. The accuracy of the RTF parameterization method is validated through simulation.
Vol. 32, pp. 4384-4398.
Citations: 0
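The abstract's "weighted sum of spherical harmonics" refers to a region-to-region expansion. One commonly used form of such a parameterization is sketched below in LaTeX, where j_v are spherical Bessel functions of the first kind, Y_{vu} are spherical harmonics, O_s and O_r are the centers of the source and receiver regions, and the coefficients alpha are the RTF parameters at wavenumber k; the paper's exact indexing and normalization may differ.

```latex
% Illustrative region-to-region RTF expansion (notation is assumed, not the paper's).
H(\mathbf{x}_s,\mathbf{x}_r;k)=
\sum_{v=0}^{V}\sum_{u=-v}^{v}\sum_{v'=0}^{V'}\sum_{u'=-v'}^{v'}
\alpha_{vu}^{v'u'}(k)\,
j_v\!\left(k\,\|\mathbf{x}_s-\mathbf{O}_s\|\right)
Y_{vu}\!\left(\widehat{\mathbf{x}_s-\mathbf{O}_s}\right)
j_{v'}\!\left(k\,\|\mathbf{x}_r-\mathbf{O}_r\|\right)
Y_{v'u'}\!\left(\widehat{\mathbf{x}_r-\mathbf{O}_r}\right)
```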
On the Quantization of Neural Models for Speaker Verification
IF 4.1, CAS Q2, Computer Science
IEEE/ACM Transactions on Audio, Speech, and Language Processing. Pub Date: 2024-09-20. DOI: 10.1109/TASLP.2024.3463430
Vishal Kumar; Vinayak Abrol; Mathew Magamai Doss
Abstract: This paper addresses the sub-optimality of current post-training quantization (PTQ) and quantization-aware training (QAT) methods for state-of-the-art speaker verification (SV) models featuring intricate architectural elements such as channel aggregation and squeeze-excitation modules. To address these limitations, we propose 1) a data-independent PTQ technique employing iterative low-precision calibration on pre-trained models, and 2) a data-dependent QAT method designed to reduce the performance gap between full-precision and integer models. Our QAT involves two progressive stages in which FP-32 weights are first transformed into FP-8, adapting precision based on the gradient norm, followed by learning of the quantizer parameters (scale and zero-point) for INT8 conversion. Experimental validation underscores the effectiveness of our method for model quantization, demonstrating reduced floating-point operations and INT8 inference time while maintaining performance on par with full-precision models.
Vol. 32, pp. 4226-4236.
Citations: 0
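For readers unfamiliar with the scale and zero-point parameters mentioned in the abstract, here is a generic INT8 affine quantization and a straight-through "fake quantization" step of the kind used in QAT. It illustrates the mechanism only and is not the paper's iterative calibration or FP-8 staging.

```python
# Generic INT8 affine quantization with a straight-through estimator for QAT.
import torch

def quant_params(x: torch.Tensor, n_bits: int = 8):
    qmin, qmax = 0, 2 ** n_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = (qmin - torch.round(x.min() / scale)).clamp(qmin, qmax)
    return scale, zero_point

def quantize(x, scale, zp, n_bits=8):
    return torch.clamp(torch.round(x / scale + zp), 0, 2 ** n_bits - 1)

def dequantize(q, scale, zp):
    return (q - zp) * scale

def fake_quant(x, scale, zp):
    # Forward uses the quantized value; backward passes gradients straight through.
    xq = dequantize(quantize(x, scale, zp), scale, zp)
    return x + (xq - x).detach()

w = torch.randn(4, 4, requires_grad=True)
scale, zp = quant_params(w.detach())
loss = fake_quant(w, scale, zp).pow(2).sum()
loss.backward()                      # gradients still flow to full-precision weights
print(scale.item(), zp.item(), w.grad.abs().sum().item())
```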
Bayesian Parameter-Efficient Fine-Tuning for Overcoming Catastrophic Forgetting
IF 4.1, CAS Q2, Computer Science
IEEE/ACM Transactions on Audio, Speech, and Language Processing. Pub Date: 2024-09-18. DOI: 10.1109/TASLP.2024.3463395
Haolin Chen; Philip N. Garner
Abstract: We are motivated primarily by the adaptation of text-to-speech synthesis models; however, we argue that more generic parameter-efficient fine-tuning (PEFT) is an appropriate framework for such adaptation. Nevertheless, catastrophic forgetting remains an issue with PEFT, damaging the pre-trained model's inherent capabilities. We demonstrate that existing Bayesian learning techniques can be applied to PEFT to prevent catastrophic forgetting, as long as the parameter shift of the fine-tuned layers can be calculated differentiably. In a principled series of experiments on language modeling and speech synthesis tasks, we utilize established Laplace approximations, including diagonal and Kronecker-factored approaches, to regularize PEFT with low-rank adaptation (LoRA) and compare their performance in preserving pre-training knowledge. Our results demonstrate that catastrophic forgetting can be overcome by our methods without degrading fine-tuning performance, and that the Kronecker-factored approximation preserves pre-training knowledge better than the diagonal one.
Vol. 32, pp. 4253-4262. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10683983
Citations: 0
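The key requirement stated in the abstract is that the parameter shift of the fine-tuned layers be differentiable; with LoRA the shift of a linear layer is simply delta_W = B @ A. The sketch below applies a diagonal (Fisher-weighted) Laplace penalty to that shift; the Fisher values, scaling factor, and single linear layer are placeholders rather than the paper's setup.

```python
# Sketch: diagonal-Laplace regularization of the LoRA-induced weight shift.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)              # frozen pre-trained weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def delta_w(self):
        return self.B @ self.A                   # differentiable parameter shift

    def forward(self, x):
        return self.base(x) + x @ self.delta_w().t()

layer = LoRALinear(nn.Linear(32, 32))
fisher = torch.rand(32, 32)                      # assumed diagonal Fisher of the base weights

x, target = torch.randn(8, 32), torch.randn(8, 32)
task_loss = nn.functional.mse_loss(layer(x), target)
# Laplace regularizer: penalize shifts along directions the pre-training task deems important.
laplace_penalty = 0.5 * (fisher * layer.delta_w() ** 2).sum()
loss = task_loss + 1e-2 * laplace_penalty
loss.backward()
print(task_loss.item(), laplace_penalty.item())
```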
NeuroHeed: Neuro-Steered Speaker Extraction Using EEG Signals
IF 4.1, CAS Q2, Computer Science
IEEE/ACM Transactions on Audio, Speech, and Language Processing. Pub Date: 2024-09-18. DOI: 10.1109/TASLP.2024.3463498
Zexu Pan; Marvin Borsdorf; Siqi Cai; Tanja Schultz; Haizhou Li
Abstract: Humans possess the remarkable ability to selectively attend to a single speaker amidst competing voices and background noise, known as selective auditory attention. Recent studies in auditory neuroscience indicate a strong correlation between the attended speech signal and the corresponding neuronal activity elicited in the brain. In this work, we study such brain activity measured using affordable and non-intrusive electroencephalography (EEG) devices. We present NeuroHeed, a speaker extraction model that leverages the listener's synchronized EEG signals to extract the attended speech signal in a cocktail party scenario, in which the extraction process is conditioned on a neuronal attractor encoded from the EEG signal. We propose both an offline and an online NeuroHeed, with the latter designed for real-time inference. In the online NeuroHeed, we additionally propose an autoregressive speaker encoder that accumulates past extracted speech signals for self-enrollment of the attended speaker information into an auditory attractor, retaining the attentional momentum over time. Online NeuroHeed extracts the current window of the speech signal with guidance from both attractors. Experimental results on the two-speaker scenario of the KUL dataset demonstrate that NeuroHeed effectively extracts brain-attended speech signals, with an average scale-invariant signal-to-distortion ratio improvement (SI-SDRi) of 14.3 dB and an extraction accuracy of 90.8% in offline settings, and an SI-SDRi of 11.2 dB and an extraction accuracy of 85.1% in online settings.
Vol. 32, pp. 4456-4470. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10683957
Citations: 0
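To illustrate how an EEG-derived attractor can condition extraction, the skeleton below encodes EEG into an attractor vector and uses it to modulate a mask over encoded mixture features; the GRU/conv choices and all dimensions are assumptions, and this is not the NeuroHeed architecture.

```python
# Illustrative EEG-conditioned speaker-extraction skeleton.
import torch
import torch.nn as nn

class EEGAttractor(nn.Module):
    def __init__(self, eeg_channels=64, dim=128):
        super().__init__()
        self.rnn = nn.GRU(eeg_channels, dim, batch_first=True)
    def forward(self, eeg):                      # eeg: [B, T_eeg, C]
        _, h = self.rnn(eeg)
        return h[-1]                             # [B, dim] neuronal attractor

class ConditionedExtractor(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Conv1d(1, feat_dim, 16, stride=8)
        self.mask_net = nn.Conv1d(2 * feat_dim, feat_dim, 1)
        self.decoder = nn.ConvTranspose1d(feat_dim, 1, 16, stride=8)
    def forward(self, mixture, attractor):       # mixture: [B, T]
        feats = torch.relu(self.encoder(mixture.unsqueeze(1)))      # [B, D, T']
        cond = attractor.unsqueeze(-1).expand(-1, -1, feats.shape[-1])
        mask = torch.sigmoid(self.mask_net(torch.cat([feats, cond], dim=1)))
        return self.decoder(feats * mask).squeeze(1)

eeg_enc, extractor = EEGAttractor(), ConditionedExtractor()
mixture = torch.randn(2, 16000)                  # two-speaker mixture waveforms
eeg = torch.randn(2, 200, 64)                    # synchronized EEG of the listener
est = extractor(mixture, eeg_enc(eeg))
print(est.shape)                                 # torch.Size([2, 16000])
```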
Automatic Detection of Speech Sound Disorder in Cantonese-Speaking Pre-School Children
IF 4.1, CAS Q2, Computer Science
IEEE/ACM Transactions on Audio, Speech, and Language Processing. Pub Date: 2024-09-18. DOI: 10.1109/TASLP.2024.3463503
Si-Ioi Ng; Cymie Wing-Yee Ng; Jiarui Wang; Tan Lee
Abstract: Speech sound disorder (SSD) is a type of developmental disorder in which children encounter persistent difficulties in correctly producing certain speech sounds. Conventionally, assessment of SSD relies largely on speech and language pathologists (SLPs) with an appropriate language background. Given the unmet demand for qualified SLPs, automatic detection of SSD is highly desirable for assisting clinical work and improving the efficiency and quality of services. In this paper, methods and systems for fully automatic detection of SSD in young children are investigated, and a microscopic approach and a macroscopic approach are developed. The microscopic system is based on detecting phonological errors in impaired child speech: a deep neural network (DNN) model is trained to learn the similarity and contrast between consonant segments, a phonological error is identified by contrasting a test speech segment with reference segments, and the phone-level similarity scores are aggregated for speaker-level SSD detection. The macroscopic approach leverages holistic changes in speech characteristics related to the disorder; various types of speaker-level embeddings are investigated and compared. Experimental results show that the proposed microscopic system achieves an unweighted average recall (UAR) of 84.0% to 91.9% on phone-level error detection, and the proposed macroscopic approach achieves a UAR of 89.0% on speaker-level SSD detection. The speaker embeddings adopted for macroscopic SSD detection effectively discard information related to the speaker's personal identity.
Vol. 32, pp. 4355-4368.
Citations: 0
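A toy version of the "microscopic" idea: each consonant segment is scored by contrasting its DNN embedding with embeddings of correct and erroneous reference productions, and the phone-level decisions are aggregated into a speaker-level score. The embedding network, the max-similarity scoring, and the zero threshold are placeholders, not the paper's model.

```python
# Toy phone-level error scoring aggregated to a speaker-level SSD score.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 32))  # stand-in DNN

def segment_embedding(feats):                    # feats: [T, 40], e.g. filterbank frames
    return F.normalize(embed(feats).mean(0), dim=-1)

def phone_error_score(test_feats, correct_refs, error_refs):
    """Higher score = closer to known error productions than to correct ones."""
    e = segment_embedding(test_feats)
    sim_correct = max(torch.dot(e, segment_embedding(r)) for r in correct_refs)
    sim_error = max(torch.dot(e, segment_embedding(r)) for r in error_refs)
    return (sim_error - sim_correct).item()

# Speaker-level decision: fraction of segments flagged as phonological errors.
segments = [torch.randn(50, 40) for _ in range(20)]       # consonant segments of one child
correct_refs = [torch.randn(50, 40) for _ in range(5)]    # reference correct productions
error_refs = [torch.randn(50, 40) for _ in range(5)]      # reference erroneous productions
flags = [phone_error_score(s, correct_refs, error_refs) > 0.0 for s in segments]
speaker_ssd_score = sum(flags) / len(flags)
print(speaker_ssd_score)
```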