{"title":"基于人体动作指令识别的交互式声音生成系统","authors":"Hotaka Kitabora, Y. Maeda, Yasutake Takahashi","doi":"10.1109/SNPD.2012.54","DOIUrl":null,"url":null,"abstract":"Recently, the interactive sound is researched in the field of an artificial life or Kansei engineering as one of interactive arts. An interactive sound changes the aspect variously by the interaction of human and a system, and it aims at generating the sound which was full of the complexity and diversity beyond expectations of human. In our laboratory, the chaotic sound generation system which shows various aspects has been developed by using the Globally Coupled Map (GCM) which puts plural chaos elements in order and operates each chaos and the whole synchronicity. However, in the present system, what we can operate is only a chaos parameter. Therefore this is the system which it is hard to reflect the human's intention in the generated sound. So, in this research, the system which analyzes human motions and instructions with an external camera and defines the directivity of the output sound is proposed. Motion capture system detects coordinates of human's joint. We designed the sound generation system to which the element of sound pitch, length, volume, and tonality are changed by the detected motion. The intention of desired sound by human is able to be reflected to the system, and we can expect to realize the large interaction effect. 
Moreover, it is also reported by Kansei evaluation to confirm the efficiency of the designed system.","PeriodicalId":387936,"journal":{"name":"2012 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2012-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Interactive Sound Generation System Based on Instruction Recognition of Human Body Motion\",\"authors\":\"Hotaka Kitabora, Y. Maeda, Yasutake Takahashi\",\"doi\":\"10.1109/SNPD.2012.54\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recently, the interactive sound is researched in the field of an artificial life or Kansei engineering as one of interactive arts. An interactive sound changes the aspect variously by the interaction of human and a system, and it aims at generating the sound which was full of the complexity and diversity beyond expectations of human. In our laboratory, the chaotic sound generation system which shows various aspects has been developed by using the Globally Coupled Map (GCM) which puts plural chaos elements in order and operates each chaos and the whole synchronicity. However, in the present system, what we can operate is only a chaos parameter. Therefore this is the system which it is hard to reflect the human's intention in the generated sound. So, in this research, the system which analyzes human motions and instructions with an external camera and defines the directivity of the output sound is proposed. Motion capture system detects coordinates of human's joint. We designed the sound generation system to which the element of sound pitch, length, volume, and tonality are changed by the detected motion. The intention of desired sound by human is able to be reflected to the system, and we can expect to realize the large interaction effect. 
Moreover, it is also reported by Kansei evaluation to confirm the efficiency of the designed system.\",\"PeriodicalId\":387936,\"journal\":{\"name\":\"2012 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-08-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SNPD.2012.54\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SNPD.2012.54","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Interactive Sound Generation System Based on Instruction Recognition of Human Body Motion
Recently, interactive sound has been studied as a form of interactive art in the fields of artificial life and Kansei engineering. Interactive sound changes its character through the interaction between a human and a system, and aims to generate sound whose complexity and diversity exceed human expectations. In our laboratory, a chaotic sound generation system exhibiting varied behavior has been developed using the Globally Coupled Map (GCM), which arranges multiple chaotic elements and controls both each element's chaotic dynamics and their overall synchronization. However, in the present system, the only quantity a user can operate is a chaos parameter, so it is difficult to reflect the user's intention in the generated sound. In this research, we therefore propose a system that analyzes human motions and instructions with an external camera and steers the character of the output sound accordingly. A motion capture system detects the coordinates of the human's joints, and the sound generation system changes the pitch, duration, volume, and tonality of the sound according to the detected motion. The user's intention for the desired sound can thus be reflected in the system, and a large interaction effect can be expected. Moreover, a Kansei evaluation confirming the effectiveness of the designed system is also reported.
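The Globally Coupled Map mentioned in the abstract can be illustrated with a minimal sketch. This assumes the standard GCM formulation with a logistic local map; the parameter values and function names below are illustrative and not taken from the paper, which does not specify its implementation.

```python
# Minimal sketch of a Globally Coupled Map (GCM): each chaotic element
# is updated by a local logistic map f(x) = 1 - a*x^2, then coupled to
# the mean field of all elements. Parameters a (nonlinearity) and
# eps (coupling strength) are illustrative assumptions.
def gcm_step(x, a=1.8, eps=0.1):
    """One update: x_{n+1}(i) = (1-eps)*f(x_n(i)) + (eps/N)*sum_j f(x_n(j))."""
    fx = [1.0 - a * v * v for v in x]          # apply the local chaotic map
    mean_field = sum(fx) / len(fx)             # global (all-to-all) coupling term
    return [(1.0 - eps) * f + eps * mean_field for f in fx]

# Evolve a small lattice of chaotic elements from distinct initial states.
x = [0.1, 0.2, 0.3, 0.4]
for _ in range(100):
    x = gcm_step(x)
print(x)
```

In such a system, varying `eps` moves the lattice between synchronized and desynchronized regimes, which is one way a single "chaos parameter" could shape the generated sound; mapping detected joint coordinates onto parameters like these is what would let body motion steer the output.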