Title: Composing by Listening: A Computer-Assisted System for Creating Emotional Music
Authors: L. Quinto, W. Thompson
Journal: Int. J. Synth. Emot. (International Journal of Synthetic Emotions)
Publication date: 2012-07-01
DOI: 10.4018/jse.2012070103 (https://doi.org/10.4018/jse.2012070103)
Citations: 9
Abstract
Most people communicate emotion through their voice, facial expressions, and gestures. However, it is assumed that only "experts" can communicate emotions in music. The authors have developed a computer-based system that enables musically untrained users to select relevant acoustic attributes to compose emotional melodies. Nonmusicians (Experiment 1) and musicians (Experiment 3) were progressively presented with pairs of melodies that each differed in one acoustic attribute (e.g., intensity: loud vs. soft). For each pair, participants chose the melody that most strongly conveyed a target emotion (anger, fear, happiness, sadness, or tenderness). Once all decisions were made, a final melody containing all choices was generated. The system allowed both untrained and trained participants to compose a range of emotional melodies. New listeners successfully decoded the emotional melodies of nonmusicians (Experiment 2) and musicians (Experiment 4). Results indicate that human-computer interaction can facilitate the composition of emotional music by musically untrained and trained individuals.
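The selection procedure described in the abstract can be read as a sequential forced choice over acoustic attributes: for each attribute the participant hears two melodies differing only in that attribute and picks the one that better conveys the target emotion, and the final melody combines all of the chosen values. A minimal sketch of that loop, assuming illustrative attribute names and values (the paper's actual attribute set and melody generation are not specified here):

```python
# Hypothetical sketch of the pairwise-selection procedure described in the
# abstract. Attribute names, option values, and the `choose` callback are
# assumptions for illustration, not the authors' implementation.

def compose_by_listening(attributes, choose):
    """Build a melody specification one attribute at a time.

    attributes: dict mapping attribute name -> (option_a, option_b)
    choose: callback standing in for the participant's forced choice;
            takes (attribute, option_a, option_b) and returns the pick.
    """
    melody = {}
    for name, (a, b) in attributes.items():
        melody[name] = choose(name, a, b)  # one forced choice per attribute pair
    return melody  # the "final melody containing all choices"

# Illustrative attribute pairs (assumed, not from the paper):
attributes = {
    "intensity": ("loud", "soft"),
    "tempo": ("fast", "slow"),
    "mode": ("major", "minor"),
    "pitch_height": ("high", "low"),
}

# A stand-in participant who always picks the first option:
result = compose_by_listening(attributes, lambda name, a, b: a)
print(result)
```

In the experiments the `choose` step is the listener's actual judgment; the point of the design is that the participant only ever makes simple A/B comparisons, while the system accumulates those choices into a complete melody.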