Hidden Markov model for automatic transcription of MIDI signals
Haruto Takeda, N. Saito, Tomoshi Otsuki, M. Nakai, H. Shimodaira, S. Sagayama
2002 IEEE Workshop on Multimedia Signal Processing, published 2002-12-09
DOI: 10.1109/MMSP.2002.1203337
Citations: 27
Abstract
This paper describes a Hidden Markov Model (HMM)-based method for automatic transcription of MIDI (Musical Instrument Digital Interface) signals of performed music. The problem is formulated as recognizing, from a given sequence of fluctuating note durations, the most likely intended note sequence, using modern continuous speech recognition techniques. Combining a stochastic model of deviating note durations with a stochastic grammar representing possible sequences of notes, the maximum likelihood estimate of the note sequence is found with the Viterbi algorithm. The same principle is successfully applied to the joint problem of bar line allocation, time measure recognition, and tempo estimation. Finally, durations of consecutive n notes are combined to form a "rhythm vector" representing tempo-free relative durations of the notes and treated in the same framework. Significant improvements over conventional "quantization" techniques are shown.
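To make the decoding idea concrete, below is a minimal sketch (not the authors' implementation) of HMM-based rhythm quantization: hidden states are intended note values, observations are performed durations, a Gaussian deviation model plays the role of the emission probability, and the Viterbi algorithm recovers the most likely intended sequence. The note values, deviation width, and uniform note-value "grammar" are illustrative assumptions, not parameters from the paper.

# Sketch of Viterbi-based rhythm quantization (assumed parameters throughout).
import math

# Hidden states: intended note values in beats (sixteenth, eighth, quarter, half).
NOTE_VALUES = [0.25, 0.5, 1.0, 2.0]

# Assumed uniform bigram grammar over note values; a real system would train
# these transition probabilities on scored music.
def log_trans(prev_idx, cur_idx):
    return math.log(1.0 / len(NOTE_VALUES))

# Assumed Gaussian model of timing deviation: a performed duration (seconds)
# scatters around the nominal duration = note value * seconds per beat.
def log_emit(duration_sec, value_beats, sec_per_beat, sigma=0.05):
    mean = value_beats * sec_per_beat
    return -0.5 * ((duration_sec - mean) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def viterbi_quantize(durations_sec, sec_per_beat):
    """Return the most likely sequence of note values for the observed durations."""
    n_states = len(NOTE_VALUES)
    # Initialization: uniform prior over note values for the first note.
    delta = [math.log(1.0 / n_states) + log_emit(durations_sec[0], v, sec_per_beat)
             for v in NOTE_VALUES]
    backptr = []
    for obs in durations_sec[1:]:
        new_delta, new_bp = [], []
        for j, v in enumerate(NOTE_VALUES):
            # Best predecessor state for state j.
            best_i = max(range(n_states), key=lambda i: delta[i] + log_trans(i, j))
            new_delta.append(delta[best_i] + log_trans(best_i, j)
                             + log_emit(obs, v, sec_per_beat))
            new_bp.append(best_i)
        delta, backptr = new_delta, backptr + [new_bp]
    # Backtrack the best path.
    state = max(range(n_states), key=lambda j: delta[j])
    path = [state]
    for bp in reversed(backptr):
        state = bp[state]
        path.append(state)
    path.reverse()
    return [NOTE_VALUES[s] for s in path]

# Example: slightly uneven eighth-eighth-quarter-quarter at 120 bpm (0.5 s per beat)
# quantizes to [0.5, 0.5, 1.0, 1.0].
print(viterbi_quantize([0.26, 0.24, 0.52, 0.47], sec_per_beat=0.5))

The same dynamic-programming structure extends, as the abstract notes, to jointly decoding bar lines, time measure, and tempo by enlarging the hidden state, or to "rhythm vectors" of relative durations that remove the dependence on an assumed tempo.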