Interactive Sound Generation System Based on Instruction Recognition of Human Body Motion

Hotaka Kitabora, Y. Maeda, Yasutake Takahashi
DOI: 10.1109/SNPD.2012.54
Published in: 2012 13th ACIS International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (2012-08-08)

Abstract

Interactive sound has recently been studied in the fields of artificial life and Kansei engineering as a form of interactive art. An interactive sound changes its character through the interaction between a human and a system, and it aims to generate sound whose complexity and diversity exceed human expectations. In our laboratory, a chaotic sound generation system exhibiting varied behavior has been developed using the Globally Coupled Map (GCM), which arranges multiple chaotic elements and controls both each element and the synchronicity of the whole. In the present system, however, the only quantity the user can manipulate is a chaos parameter, so it is difficult to reflect human intention in the generated sound. In this research, we therefore propose a system that analyzes human motions and instructions with an external camera and determines the character of the output sound accordingly. A motion capture system detects the coordinates of the human's joints, and the sound generation system changes the pitch, duration, volume, and tonality of the sound according to the detected motion. Because the sound a person intends can be reflected in the system, a large interaction effect can be expected. In addition, a Kansei evaluation confirming the effectiveness of the designed system is reported.
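To illustrate the kind of dynamics the abstract describes, the following is a minimal sketch of a Globally Coupled Map of logistic elements driving a pitch parameter. It is not the authors' implementation: the coupling strength, logistic parameter, and the mapping from element state to MIDI pitch are all assumptions chosen for illustration.

```python
def gcm_step(x, eps, a):
    """One update of a Globally Coupled Map of logistic units:
    x_{n+1}(i) = (1 - eps) * f(x_n(i)) + (eps / N) * sum_j f(x_n(j)),
    with f(x) = 1 - a * x**2 (the standard GCM form; parameters assumed)."""
    fx = [1.0 - a * xi * xi for xi in x]       # apply the local chaotic map
    mean = sum(fx) / len(fx)                   # global (mean-field) coupling term
    return [(1.0 - eps) * fi + eps * mean for fi in fx]

def to_midi_pitch(xi, low=48, high=84):
    """Map a unit state in [-1, 1] to a MIDI note number (hypothetical mapping)."""
    return low + int((xi + 1.0) / 2.0 * (high - low))

# Run a small GCM of 5 chaotic elements and read a pitch off each element.
x = [0.1 * i for i in range(1, 6)]
for _ in range(100):
    x = gcm_step(x, eps=0.1, a=1.8)
pitches = [to_midi_pitch(xi) for xi in x]
```

Varying `eps` moves the ensemble between synchronized and desynchronized regimes, which is the kind of "whole synchronicity" control the abstract attributes to the GCM; in the proposed system, detected joint motion would additionally steer parameters such as pitch range, duration, and volume.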