Visual-guided audio source separation: an empirical study
Thanh Thi Hien Duong, Huu Manh Nguyen, Hai Nghiem Thi, Thi-Lan Le, Phi-Le Nguyen, Q. Nguyen
2021 International Conference on Multimedia Analysis and Pattern Recognition (MAPR), October 2021
DOI: 10.1109/MAPR53640.2021.9585244
Citations: 3
Abstract
Real-world video scenes are usually complex mixtures of many different audio-visual objects. Humans with normal hearing can easily locate, identify, and differentiate sound sources that are heard simultaneously. For machines, however, this remains extremely difficult: building machine listening algorithms that automatically separate sound sources under challenging mixing conditions is still an open problem. In this paper, we consider a visual-guided audio source separation approach for separating the sounds of different instruments in a video, where detected visual objects are used to assist the separation process. In particular, we investigate the use of different object detectors for this task. In addition, as an empirical study, we analyze the effect of the training dataset on separation performance. Finally, experimental results on the benchmark MUSIC dataset confirm the advantages of the new object detector investigated in this paper.
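To make the visual-guided setup concrete, the sketch below shows one common way such pipelines are wired together: an object detector crops an instrument from a video frame, a visual encoder maps the crop to a conditioning vector, and a mask network applies that vector to the mixture spectrogram to isolate the corresponding source. This is a minimal illustration in the spirit of Sound-of-Pixels-style systems, not the authors' exact architecture; the module names, layer sizes, and FiLM-style conditioning are all assumptions for the sake of the example.

```python
# Minimal sketch of a visual-guided separation pipeline (illustrative only).
# Assumptions: an upstream object detector has already produced `obj_crop`,
# and separation is done by soft-masking the mixture magnitude spectrogram.
import torch
import torch.nn as nn

class VisualEncoder(nn.Module):
    """Maps a detected-object crop to a conditioning vector."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, crop):            # crop: (B, 3, H, W)
        return self.net(crop)           # (B, dim)

class MaskNet(nn.Module):
    """Predicts a time-frequency mask from the mixture spectrogram,
    modulated by the visual vector (FiLM-style conditioning)."""
    def __init__(self, dim=64):
        super().__init__()
        self.enc = nn.Conv2d(1, dim, 3, padding=1)
        self.film = nn.Linear(dim, 2 * dim)   # per-channel scale and shift
        self.dec = nn.Conv2d(dim, 1, 3, padding=1)

    def forward(self, spec, v):         # spec: (B, 1, F, T), v: (B, dim)
        h = torch.relu(self.enc(spec))
        gamma, beta = self.film(v).chunk(2, dim=1)
        h = h * gamma[:, :, None, None] + beta[:, :, None, None]
        return torch.sigmoid(self.dec(h))     # soft mask in [0, 1]

# Toy forward pass: separate one instrument from a mixture spectrogram.
mix_spec = torch.rand(1, 1, 256, 172)   # |STFT| of the audio mixture
obj_crop = torch.rand(1, 3, 224, 224)   # crop returned by the object detector
mask = MaskNet()(mix_spec, VisualEncoder()(obj_crop))
est_spec = mask * mix_spec              # masked spectrogram of the target source
print(est_spec.shape)                   # torch.Size([1, 1, 256, 172])
```

In this formulation the object detector's role is purely conditioning: a better detector yields cleaner, better-localized instrument crops, which is why the choice of detector and of its training data can directly affect separation quality, as the paper studies empirically.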