Mood-based selection of music collections by speech
G. Schatter, Paul Kramer
2011 4th International Conference on Human System Interactions (HSI 2011), 19 May 2011
DOI: 10.1109/ISCE.2011.5973826
We present a prototype that integrates mood-based file selection and speech-based control into an interactive user interface for browsing music collections and generating playlists. This bimodal approach is based on a questionnaire covering the mood-dependent listening habits of 118 subjects. Three new metaphors for speech-based navigation through a two-dimensional acoustic landscape were proposed and tested.