{"title":"使用基于感知的表示的音频源类型分割","authors":"K. Melih, R. González","doi":"10.1109/ISSPA.1999.818110","DOIUrl":null,"url":null,"abstract":"Existing audio retrieval systems fall into one of two categories: systems that can accept data of only a single type (e.g. automatic speech recognition systems) or systems that report to offer content based retrieval for audio data of any type. However, systems belonging to the latter category often impose the restriction that only one type of sound can be presented at a time. This requirement is reasonable since the interpretation of various audio qualities such as pitch and rhythm depends upon the audio type. Pitch variation, for example, can be interpreted as the melody line in music while in speech it can be used as a means for detecting change of speaker. The problem, however, is that existing systems either expect segmentation to have been performed a priori or perform the segmentation in a completely separate process. This introduces unnecessary processing and file manipulation overheads. To combat this, a new perceptually based representation has been developed specifically to support content-based retrieval. This paper discusses the application of the new representation to sound source segmentation and identification.","PeriodicalId":302569,"journal":{"name":"ISSPA '99. Proceedings of the Fifth International Symposium on Signal Processing and its Applications (IEEE Cat. No.99EX359)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1999-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":"{\"title\":\"Audio source type segmentation using a perceptually based representation\",\"authors\":\"K. Melih, R. González\",\"doi\":\"10.1109/ISSPA.1999.818110\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Existing audio retrieval systems fall into one of two categories: systems that can accept data of only a single type (e.g. automatic speech recognition systems) or systems that report to offer content based retrieval for audio data of any type. However, systems belonging to the latter category often impose the restriction that only one type of sound can be presented at a time. This requirement is reasonable since the interpretation of various audio qualities such as pitch and rhythm depends upon the audio type. Pitch variation, for example, can be interpreted as the melody line in music while in speech it can be used as a means for detecting change of speaker. The problem, however, is that existing systems either expect segmentation to have been performed a priori or perform the segmentation in a completely separate process. This introduces unnecessary processing and file manipulation overheads. To combat this, a new perceptually based representation has been developed specifically to support content-based retrieval. This paper discusses the application of the new representation to sound source segmentation and identification.\",\"PeriodicalId\":302569,\"journal\":{\"name\":\"ISSPA '99. Proceedings of the Fifth International Symposium on Signal Processing and its Applications (IEEE Cat. No.99EX359)\",\"volume\":\"43 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1999-08-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"5\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ISSPA '99. 
Proceedings of the Fifth International Symposium on Signal Processing and its Applications (IEEE Cat. No.99EX359)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISSPA.1999.818110\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ISSPA '99. Proceedings of the Fifth International Symposium on Signal Processing and its Applications (IEEE Cat. No.99EX359)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISSPA.1999.818110","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Audio source type segmentation using a perceptually based representation
Existing audio retrieval systems fall into one of two categories: systems that accept data of only a single type (e.g. automatic speech recognition systems) and systems that purport to offer content-based retrieval for audio data of any type. However, systems in the latter category often impose the restriction that only one type of sound can be presented at a time. This requirement is reasonable, since the interpretation of audio qualities such as pitch and rhythm depends on the audio type. Pitch variation, for example, can be interpreted as the melody line in music, while in speech it can serve as a cue for detecting a change of speaker. The problem, however, is that existing systems either expect segmentation to have been performed a priori or perform the segmentation in a completely separate process, which introduces unnecessary processing and file manipulation overheads. To address this, a new perceptually based representation has been developed specifically to support content-based retrieval. This paper discusses the application of the new representation to sound source segmentation and identification.
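The abstract does not detail the perceptual representation itself, but the underlying observation, that pitch behaves differently across source types (stable, melodic contours in music versus rapidly varying contours in speech), can be illustrated with a self-contained sketch. The Python example below is an assumption-laden toy, not the authors' method: it synthesizes a steady tone followed by a pitch-wobbling tone, estimates per-frame pitch by autocorrelation, and labels frames "music" or "speech" from local pitch variability. The frame sizes, the 80-400 Hz search range, and the variability threshold are all illustrative choices.

```python
import numpy as np

SR = 16000          # sample rate (Hz)
FRAME = 512         # analysis frame length (samples)
HOP = 256           # hop between frames (samples)

def synth_test_signal():
    """Toy signal: 2 s of a steady 220 Hz tone ('music-like'),
    then 2 s of a pitch-wobbling tone ('speech-like')."""
    t = np.arange(2 * SR) / SR
    steady = 0.5 * np.sin(2 * np.pi * 220.0 * t)              # constant pitch
    f_inst = 180.0 + 60.0 * np.sin(2 * np.pi * 3.0 * t)       # wobbling pitch
    wobbly = 0.5 * np.sin(2 * np.pi * np.cumsum(f_inst) / SR)
    return np.concatenate([steady, wobbly])

def frame_pitch(frame):
    """Crude autocorrelation pitch estimate (Hz) for one frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = SR // 400, SR // 80          # search lags for 80-400 Hz
    lag = lo + int(np.argmax(ac[lo:hi]))
    return SR / lag

def segment_by_pitch_stability(x, var_threshold=15.0):
    """Label frames by local pitch variability, then merge runs of
    identical labels into (start_time, end_time, label) segments."""
    pitches = np.array([frame_pitch(x[i:i + FRAME])
                        for i in range(0, len(x) - FRAME, HOP)])
    labels = []
    for i in range(len(pitches)):
        window = pitches[max(0, i - 4):i + 5]      # local pitch context
        labels.append("music" if window.std() < var_threshold else "speech")
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start * HOP / SR, i * HOP / SR, labels[start]))
            start = i
    return segments

if __name__ == "__main__":
    for t0, t1, label in segment_by_pitch_stability(synth_test_signal()):
        print(f"{t0:6.2f}s - {t1:6.2f}s : {label}")
```

Run as-is, this prints roughly one "music" segment for the first two seconds and one "speech" segment for the rest. A real system, as the abstract argues, would fold such source-type decisions into the retrieval representation itself rather than running them as a separate preprocessing pass over the raw file.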