{"title":"Efficient audio–visual information fusion using encoding pace synchronization for Audio–Visual Speech Separation","authors":"Xinmeng Xu , Weiping Tu , Yuhong Yang","doi":"10.1016/j.inffus.2024.102749","DOIUrl":null,"url":null,"abstract":"<div><div>Contemporary audio–visual speech separation (AVSS) models typically use encoders that merge audio and visual representations by concatenating them at a specific layer. This approach assumes that both modalities progress at the same pace and that information is adequately encoded at the chosen fusion layer. However, this assumption is often flawed due to inherent differences between the audio and visual modalities. In particular, the audio modality, being more directly tied to the final output (i.e., denoised speech), tends to converge faster than the visual modality. This discrepancy creates a persistent challenge in selecting the appropriate layer for fusion. To address this, we propose the Encoding Pace Synchronization Network (EPS-Net) for AVSS. EPS-Net allows for the independent encoding of the two modalities, enabling each to be processed at its own pace. At the same time, it establishes communication between the audio and visual modalities at corresponding encoding layers, progressively synchronizing their encoding speeds. This approach facilitates the gradual fusion of information while preserving the unique characteristics of each modality. The effectiveness of the proposed method has been validated through extensive experiments on the LRS2, LRS3, and VoxCeleb2 datasets, demonstrating superior performance over state-of-the-art methods.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"115 ","pages":"Article 102749"},"PeriodicalIF":14.7000,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S156625352400527X","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Contemporary audio–visual speech separation (AVSS) models typically use encoders that merge audio and visual representations by concatenating them at a specific layer. This approach assumes that both modalities progress at the same pace and that information is adequately encoded at the chosen fusion layer. However, this assumption is often flawed due to inherent differences between the audio and visual modalities. In particular, the audio modality, being more directly tied to the final output (i.e., denoised speech), tends to converge faster than the visual modality. This discrepancy creates a persistent challenge in selecting the appropriate layer for fusion. To address this, we propose the Encoding Pace Synchronization Network (EPS-Net) for AVSS. EPS-Net allows for the independent encoding of the two modalities, enabling each to be processed at its own pace. At the same time, it establishes communication between the audio and visual modalities at corresponding encoding layers, progressively synchronizing their encoding speeds. This approach facilitates the gradual fusion of information while preserving the unique characteristics of each modality. The effectiveness of the proposed method has been validated through extensive experiments on the LRS2, LRS3, and VoxCeleb2 datasets, demonstrating superior performance over state-of-the-art methods.
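To make the layer-wise communication idea concrete, below is a minimal, hypothetical sketch in PyTorch of two parallel modality encoders that advance independently but exchange information at each corresponding layer. It is not the authors' EPS-Net implementation: the module names (`CrossModalExchange`, `PaceSyncEncoder`), the feature dimensions, the Transformer layers, and the gated residual fusion rule are all illustrative assumptions chosen for brevity.

```python
# Hypothetical sketch of the layer-wise cross-modal communication described in
# the abstract (NOT the authors' EPS-Net code). Two modality-specific encoder
# stacks are kept separate, and after each corresponding layer a small gated
# projection injects the other modality's features, so each modality keeps its
# own encoding path while the two are gradually pulled toward a common pace.
# All names, dimensions, and the fusion rule are illustrative assumptions.

import torch
import torch.nn as nn


class CrossModalExchange(nn.Module):
    """Gated residual injection of the other modality's features into this one."""

    def __init__(self, dim_self: int, dim_other: int):
        super().__init__()
        self.proj = nn.Linear(dim_other, dim_self)
        self.gate = nn.Sequential(nn.Linear(dim_self + dim_other, dim_self), nn.Sigmoid())

    def forward(self, x_self: torch.Tensor, x_other: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([x_self, x_other], dim=-1))  # per-feature gate in [0, 1]
        return x_self + g * self.proj(x_other)               # gated residual update


class PaceSyncEncoder(nn.Module):
    """Parallel audio/visual encoder stacks with communication at every layer."""

    def __init__(self, dim_a: int = 256, dim_v: int = 128, num_layers: int = 4):
        super().__init__()
        self.audio_layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim_a, nhead=4, batch_first=True) for _ in range(num_layers)]
        )
        self.visual_layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(dim_v, nhead=4, batch_first=True) for _ in range(num_layers)]
        )
        self.a_from_v = nn.ModuleList([CrossModalExchange(dim_a, dim_v) for _ in range(num_layers)])
        self.v_from_a = nn.ModuleList([CrossModalExchange(dim_v, dim_a) for _ in range(num_layers)])

    def forward(self, audio: torch.Tensor, visual: torch.Tensor):
        # audio: (batch, T, dim_a); visual: (batch, T, dim_v), assumed to be
        # pre-aligned to the same sequence length for this toy example.
        for a_layer, v_layer, a_ex, v_ex in zip(
            self.audio_layers, self.visual_layers, self.a_from_v, self.v_from_a
        ):
            audio = a_layer(audio)    # each modality advances independently...
            visual = v_layer(visual)
            # ...then the two exchange information at this encoding depth
            audio, visual = a_ex(audio, visual), v_ex(visual, audio)
        return audio, visual


if __name__ == "__main__":
    enc = PaceSyncEncoder()
    a = torch.randn(2, 50, 256)   # toy audio features
    v = torch.randn(2, 50, 128)   # toy visual features (same length for simplicity)
    a_out, v_out = enc(a, v)
    print(a_out.shape, v_out.shape)  # torch.Size([2, 50, 256]) torch.Size([2, 50, 128])
```

The design point this toy illustrates is that fusion is distributed across layers via lightweight exchanges rather than concentrated in a single concatenation layer; how EPS-Net actually realizes the exchange and synchronizes encoding pace is specified in the paper itself.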
Journal introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.