Extracting users' intended nuances from their expressed movements: in quadruple movements

T. Komatsu, Chihaya Kuwahara
Proceedings of the 6th Augmented Human International Conference, 2015-03-09
DOI: 10.1145/2735711.2735799

Abstract: We propose a method for extracting users' intended nuances from their expressed quadruple movements. Specifically, the method quantifies such nuances as a four-dimensional vector representation {sharpness, softness, dynamics, largeness}. We then show an example music application based on this method that, like a music conductor, changes the volume of assigned music tracks in accordance with each attribute of the vector representation extracted from the user's quadruple movements.
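The abstract describes mapping each attribute of the extracted four-dimensional nuance vector to the volume of an assigned music track. The paper gives no implementation details, so the following is only a minimal sketch of that mapping idea; the class, function, and track names are hypothetical, and the attributes are assumed to be normalized to [0, 1].

```python
# Hypothetical sketch: map a {sharpness, softness, dynamics, largeness}
# nuance vector to per-track volumes, one track per attribute.
# All names and the [0, 1] normalization are assumptions, not from the paper.
from dataclasses import dataclass


@dataclass
class NuanceVector:
    """Four-dimensional nuance representation, each attribute in [0, 1]."""
    sharpness: float
    softness: float
    dynamics: float
    largeness: float


def track_volumes(nuance: NuanceVector) -> dict:
    """Assign each nuance attribute directly as the volume of its track."""
    clamp = lambda x: max(0.0, min(1.0, x))  # keep volumes in a valid range
    return {
        "track_sharpness": clamp(nuance.sharpness),
        "track_softness": clamp(nuance.softness),
        "track_dynamics": clamp(nuance.dynamics),
        "track_largeness": clamp(nuance.largeness),
    }


# Example: a sharp, dynamic movement raises the corresponding tracks.
volumes = track_volumes(NuanceVector(0.8, 0.2, 0.6, 0.4))
```

A real system would update these volumes continuously as movements are recognized, but the per-attribute, one-track-per-dimension mapping above is the core of what the abstract describes.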