{"title":"基于手势的物理建模声音合成控制:通过演示映射的方法","authors":"Jules Françoise, Norbert Schnell, Frédéric Bevilacqua","doi":"10.1145/2502081.2502262","DOIUrl":null,"url":null,"abstract":"We address the issue of mapping between gesture and sound for gesture-based control of physical modeling sound synthesis. We propose an approach called mapping by demonstration, allowing users to design the mapping by performing gestures while listening to sound examples. The system is based on a multimodal model able to learn the relationships between gestures and sounds.","PeriodicalId":20448,"journal":{"name":"Proceedings of the 21st ACM international conference on Multimedia","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Gesture-based control of physical modeling sound synthesis: a mapping-by-demonstration approach\",\"authors\":\"Jules Françoise, Norbert Schnell, Frédéric Bevilacqua\",\"doi\":\"10.1145/2502081.2502262\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We address the issue of mapping between gesture and sound for gesture-based control of physical modeling sound synthesis. We propose an approach called mapping by demonstration, allowing users to design the mapping by performing gestures while listening to sound examples. The system is based on a multimodal model able to learn the relationships between gestures and sounds.\",\"PeriodicalId\":20448,\"journal\":{\"name\":\"Proceedings of the 21st ACM international conference on Multimedia\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-10-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 21st ACM international conference on Multimedia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2502081.2502262\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 21st ACM international conference on Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2502081.2502262","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Gesture-based control of physical modeling sound synthesis: a mapping-by-demonstration approach
We address the issue of mapping between gesture and sound for gesture-based control of physical modeling sound synthesis. We propose an approach called mapping by demonstration, which allows users to design the mapping by performing gestures while listening to sound examples. The system is based on a multimodal model that learns the relationships between gestures and sounds.
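The abstract does not specify the multimodal model, but one standard way to learn such a gesture-to-sound relationship from demonstrations is Gaussian Mixture Regression: fit a GMM on joint [gesture | sound-parameter] frames recorded while the user gestures along with a sound example, then, at performance time, recover sound parameters as the conditional expectation given the incoming gesture. The sketch below is a hypothetical illustration under that assumption; the function names (train_joint_model, predict_sound) are invented for this example and are not from the paper.

```python
# Hypothetical sketch of mapping-by-demonstration via Gaussian Mixture
# Regression (GMR). NOT the paper's implementation: the model here is
# assumed, chosen as a common realization of a joint gesture-sound model.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_joint_model(gesture_frames, sound_frames, n_components=8):
    """Fit a GMM on concatenated [gesture | sound-parameter] frames
    recorded while the user performs along with a sound example."""
    joint = np.hstack([gesture_frames, sound_frames])
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="full", random_state=0)
    gmm.fit(joint)
    return gmm

def predict_sound(gmm, gesture, d_g):
    """Regress sound parameters from one gesture frame by taking the
    conditional expectation E[sound | gesture] under the joint GMM.
    d_g is the dimensionality of the gesture part of each frame."""
    x = np.asarray(gesture)
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    resp = np.empty(len(weights))
    cond_means = []
    for k in range(len(weights)):
        # Partition component k into gesture (g) and sound (s) blocks.
        mu_g, mu_s = means[k, :d_g], means[k, d_g:]
        S_gg = covs[k][:d_g, :d_g]
        S_sg = covs[k][d_g:, :d_g]
        diff = x - mu_g
        inv_gg = np.linalg.inv(S_gg)
        # Responsibility of component k under the gesture marginal.
        norm = np.sqrt(np.linalg.det(2 * np.pi * S_gg))
        resp[k] = weights[k] * np.exp(-0.5 * diff @ inv_gg @ diff) / norm
        # Conditional mean of the sound block given the gesture.
        cond_means.append(mu_s + S_sg @ inv_gg @ diff)
    resp /= resp.sum()
    # Responsibility-weighted blend of the per-component predictions.
    return resp @ np.array(cond_means)
```

In such a setup, each incoming gesture frame would be passed through predict_sound and the resulting parameter vector would drive the physical-model synthesizer in real time, so the mapping is defined entirely by the recorded demonstrations rather than hand-coded rules.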