Composing by Listening: A Computer-Assisted System for Creating Emotional Music

L. Quinto, W. Thompson
DOI: 10.4018/jse.2012070103
Journal: Int. J. Synth. Emot.
Published: 2012-07-01 (Journal Article)
Citations: 9

Abstract

Most people communicate emotion through their voice, facial expressions, and gestures. However, it is assumed that only "experts" can communicate emotions in music. The authors have developed a computer-based system that enables musically untrained users to select relevant acoustic attributes to compose emotional melodies. Nonmusicians (Experiment 1) and musicians (Experiment 3) were progressively presented with pairs of melodies that each differed in one acoustic attribute (e.g., intensity: loud vs. soft). For each pair, participants chose the melody that most strongly conveyed a target emotion (anger, fear, happiness, sadness, or tenderness). Once all decisions were made, a final melody containing all choices was generated. The system allowed both untrained and trained participants to compose a range of emotional melodies. New listeners successfully decoded the emotional melodies of nonmusicians (Experiment 2) and musicians (Experiment 4). Results indicate that human-computer interaction can facilitate the composition of emotional music by musically untrained and trained individuals.
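The selection procedure the abstract describes — progressively presenting melody pairs that differ in a single acoustic attribute and accumulating the listener's choices into one final melody — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the attribute names and value pairs below are hypothetical examples (only "intensity: loud vs. soft" is mentioned in the abstract).

```python
# Hedged sketch of the pairwise "composing by listening" procedure.
# Attributes other than intensity are illustrative assumptions.
ATTRIBUTE_OPTIONS = {
    "intensity": ("loud", "soft"),
    "tempo": ("fast", "slow"),
    "pitch_height": ("high", "low"),
    "mode": ("major", "minor"),
}

def compose_by_listening(choose):
    """Present one pair per attribute; `choose(attribute, a, b)` stands in
    for the listener picking the variant that best conveys the target
    emotion. Returns the final melody as attribute -> chosen value."""
    melody = {}
    for attribute, (a, b) in ATTRIBUTE_OPTIONS.items():
        melody[attribute] = choose(attribute, a, b)
    return melody

# Example: a listener aiming for "sadness" who always prefers the
# second (softer/slower/lower/minor) variant of each pair.
sad_melody = compose_by_listening(lambda attr, a, b: b)
print(sad_melody)
```

The point of the design is that the user never manipulates acoustic parameters directly; every decision is a forced choice between two audible alternatives, which is what makes the system usable by musically untrained participants.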