{"title":"MFAE: Masked frame-level autoencoder with hybrid-supervision for low-resource music transcription","authors":"Yulun Wu, Jiahao Zhao, Yi Yu, Wei Li","doi":"10.1109/ICME55011.2023.00194","DOIUrl":null,"url":null,"abstract":"Automantic Music Transcription (AMT) is an essential topic in music information retrieval (MIR), and it aims to transcribe audio recordings into symbolic representations. Recently, large-scale piano datasets with high-quality notations have been proposed for high-resolution piano transcription, which resulted in domain-specific AMT models achieved state-of- the-art results. However, those methods are hardly generalized to other ’low-resource’ instruments (such as guitar, cello, clarinet, etc.) transcription. In this paper, we propose a hybrid-supervised framework, the masked frame-level autoencoder (MFAE), to solve this issue. The proposed MFAE reconstructs the frame-level features of low-resource data to understand generic representations of low-resource instruments and improves low-resource transcription performance. Experimental results on several low- resource datasets (MAPS, MusicNet, and Guitarset) show that our framework achieves state-of-the-art performance in note-wise scores (Note F1 83.4%\\64.1%\\86.7%, Note-with-offset F1 59.8%\\41.4%\\71.6%). Moreover, our framework can be well generalized to various genres of instrument transcription, both in data-plentiful and data-limited scenarios.","PeriodicalId":321830,"journal":{"name":"2023 IEEE International Conference on Multimedia and Expo (ICME)","volume":"153 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Multimedia and Expo (ICME)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICME55011.2023.00194","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Automatic Music Transcription (AMT) is an essential topic in music information retrieval (MIR); it aims to transcribe audio recordings into symbolic representations. Recently, large-scale piano datasets with high-quality annotations have been proposed for high-resolution piano transcription, enabling domain-specific AMT models to achieve state-of-the-art results. However, these methods generalize poorly to the transcription of other 'low-resource' instruments (such as guitar, cello, and clarinet). In this paper, we propose a hybrid-supervised framework, the masked frame-level autoencoder (MFAE), to address this issue. The proposed MFAE reconstructs the frame-level features of low-resource data to learn generic representations of low-resource instruments and improves low-resource transcription performance. Experimental results on several low-resource datasets (MAPS, MusicNet, and GuitarSet) show that our framework achieves state-of-the-art performance in note-wise scores (Note F1 of 83.4%/64.1%/86.7% and Note-with-offset F1 of 59.8%/41.4%/71.6%, respectively). Moreover, our framework generalizes well to the transcription of various instruments, in both data-plentiful and data-limited scenarios.
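To make the masked frame-level reconstruction idea concrete, below is a minimal PyTorch sketch of self-supervised pretraining that masks a random subset of spectrogram frames and reconstructs them. The abstract does not specify the architecture, so everything here is an illustrative assumption: the class name `MaskedFrameAutoencoder`, the transformer backbone, the 229-bin input, and the 50% mask ratio are hypothetical choices, not the paper's actual design.

```python
import torch
import torch.nn as nn

class MaskedFrameAutoencoder(nn.Module):
    """Illustrative masked frame-level autoencoder (hypothetical architecture,
    not the paper's): mask random frames, reconstruct them from context."""

    def __init__(self, n_bins=229, d_model=256, n_layers=4, n_heads=4):
        super().__init__()
        self.proj_in = nn.Linear(n_bins, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.proj_out = nn.Linear(d_model, n_bins)   # frame-level reconstruction head
        self.mask_token = nn.Parameter(torch.zeros(d_model))  # learned mask embedding

    def forward(self, frames, mask_ratio=0.5):
        # frames: (batch, time, n_bins) frame-level spectral features
        B, T, _ = frames.shape
        x = self.proj_in(frames)
        # Randomly choose which frames to mask (True = masked).
        mask = torch.rand(B, T, device=frames.device) < mask_ratio
        # Replace masked frames with the learned mask token.
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        recon = self.proj_out(self.encoder(x))
        # Reconstruction loss is computed only on the masked frames.
        loss = ((recon - frames) ** 2)[mask].mean()
        return loss, recon, mask
```

A usage sketch, with made-up shapes (8 clips, 400 frames, 229 frequency bins):

```python
model = MaskedFrameAutoencoder()
spec = torch.randn(8, 400, 229)   # e.g., log-mel or CQT frame features
loss, recon, mask = model(spec)
loss.backward()
```

Scoring the loss only on masked positions is the standard masked-autoencoder choice: it forces the encoder to infer missing frames from their musical context rather than copy the input, which is the property that makes the learned representations transferable to low-resource instruments.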