{"title":"An Improvement on Audio-to-MIDI Alignment Using Triplet Pair","authors":"Yifan Wang, Shuchang Liu, Li Guo","doi":"10.1145/3347450.3357661","DOIUrl":null,"url":null,"abstract":"In this paper, we employ a neural network based cross-modality model on audio-to-MIDI alignment task. A novel loss function based on Hinge Loss is proposed to optimize the model learning an Euclidean embedding space, where the distance of embedding vectors can be directly used as a measure of similarity in alignment. In the previous alignment system also based on cross-modality model, there are positive and negative pairs in the loss function, which represent aligned and misaligned pairs. In this paper, we introduce an extra pair named overlapping to capture musical onset information. We evaluate our system on the MAPS dataset and compare it to other previous methods. The results reveal that the align accuracy of the proposed system beats the transcription based method by a significant margin, e.g., 81.61% to 86.41%, when the align error threshold is set to 10 ms. And the proposed loss also has an improvement on the statistics of absolute onset errors in comparison to the loss function implemented in other audio-to-MIDI alignment system. We also conduct experiments on the dimension of embedding vectors and results show the proposed system can still maintain the alignment performance with lower dimension.","PeriodicalId":329495,"journal":{"name":"1st International Workshop on Multimodal Understanding and Learning for Embodied Applications","volume":"87 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"1st International Workshop on Multimodal Understanding and Learning for Embodied Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3347450.3357661","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In this paper, we employ a neural-network-based cross-modality model for the audio-to-MIDI alignment task. A novel loss function based on hinge loss is proposed to train the model to learn a Euclidean embedding space, in which the distance between embedding vectors can be used directly as a measure of similarity during alignment. In a previous alignment system also based on a cross-modality model, the loss function contains positive and negative pairs, which represent aligned and misaligned pairs. In this paper, we introduce an additional pair type, the overlapping pair, to capture musical onset information. We evaluate our system on the MAPS dataset and compare it with previous methods. The results show that the alignment accuracy of the proposed system exceeds that of the transcription-based method by a significant margin, improving from 81.61% to 86.41% when the alignment error threshold is set to 10 ms. The proposed loss also improves the statistics of absolute onset errors compared with the loss function used in another audio-to-MIDI alignment system. We further conduct experiments on the dimensionality of the embedding vectors, and the results show that the proposed system maintains its alignment performance at lower dimensions.
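
To illustrate the kind of pair-based hinge loss the abstract describes, the sketch below shows one plausible formulation with aligned (positive), misaligned (negative), and overlapping pairs in a shared Euclidean embedding space. The function name, the margin values, and the specific hinge form applied to the overlapping term are assumptions for illustration only; they are not taken from the paper.

```python
# Hypothetical sketch of a triplet-pair hinge loss for audio-to-MIDI embedding
# learning. Margins and the treatment of overlapping pairs are assumed, not
# the authors' exact formulation.
import torch
import torch.nn.functional as F

def alignment_hinge_loss(audio_emb, midi_pos, midi_neg, midi_overlap,
                         margin_neg=1.0, margin_overlap=0.5):
    """Pull aligned pairs together, push misaligned pairs apart by at least
    `margin_neg`, and keep overlapping pairs (frames near a note onset) at an
    intermediate distance governed by `margin_overlap`.

    All inputs are (batch, dim) embedding tensors in the same Euclidean space.
    """
    d_pos = torch.norm(audio_emb - midi_pos, dim=1)      # aligned pairs: should be small
    d_neg = torch.norm(audio_emb - midi_neg, dim=1)      # misaligned pairs: should be large
    d_ovl = torch.norm(audio_emb - midi_overlap, dim=1)  # overlapping pairs: in between

    loss_pos = d_pos.pow(2).mean()
    loss_neg = F.relu(margin_neg - d_neg).pow(2).mean()
    loss_ovl = F.relu(margin_overlap - d_ovl).pow(2).mean()  # assumed hinge form
    return loss_pos + loss_neg + loss_ovl
```

Because the loss is defined purely on Euclidean distances, the learned distances between audio-frame and MIDI-frame embeddings can serve directly as the similarity costs in a downstream alignment step (e.g., a dynamic-time-warping pass), which is consistent with the usage described in the abstract.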