Mehak Maniktala, Chris Miller, Aaron Margolese-Malin, A. Jhala, Chris Martens
{"title":"M.I.N.U.E.T.: Procedural Musical Accompaniment for Textual Narratives","authors":"Mehak Maniktala, Chris Miller, Aaron Margolese-Malin, A. Jhala, Chris Martens","doi":"10.1145/3402942.3409602","DOIUrl":null,"url":null,"abstract":"Extensive research has been conducted on using procedural music generation in real-time applications such as accompaniment to musicians, visual narratives, and games. However, less attention has been paid to the enhancement of textual narratives through music. In this paper, we present Mood Into Note Using Extracted Text (MINUET), a novel system that can procedurally generate music for textual narrative segments using sentiment analysis. Textual analysis of the flow and sentiment derived from the text is used as input to condition accompanying music. Music generation systems have addressed variations through changes in sentiment. By using an ensemble predictor model to classify sentences as belonging to particular emotions, MINUET generates text-accompanying music with the goal of enhancing a reader’s experience beyond the limits of the author’s words. Music is played via the JMusic library and a set of Markov chains specific to each emotion with mood classifications evaluated via stratified 10-fold cross validation. The development of MINUET affords the reflection and analysis of features that affect the quality of generated musical accompaniment for text. 
It also serves as a sandbox for further evaluating sentiment-based systems on both text and music generation sides in a coherent experience of an implemented and extendable experiential artifact.","PeriodicalId":421754,"journal":{"name":"Proceedings of the 15th International Conference on the Foundations of Digital Games","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 15th International Conference on the Foundations of Digital Games","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3402942.3409602","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Extensive research has been conducted on procedural music generation in real-time applications such as accompaniment to musicians, visual narratives, and games. However, less attention has been paid to enhancing textual narratives through music. In this paper, we present Mood Into Note Using Extracted Text (MINUET), a novel system that procedurally generates music for textual narrative segments using sentiment analysis. Analysis of the flow and sentiment derived from the text is used as input to condition the accompanying music. Prior music generation systems have addressed musical variation through changes in sentiment. By using an ensemble predictor model to classify sentences as belonging to particular emotions, MINUET generates text-accompanying music with the goal of enhancing a reader’s experience beyond the limits of the author’s words. Music is played via the jMusic library using a set of Markov chains specific to each emotion, with mood classifications evaluated via stratified 10-fold cross-validation. The development of MINUET affords reflection on and analysis of the features that affect the quality of generated musical accompaniment for text. It also serves as a sandbox for further evaluating sentiment-based systems on both the text-analysis and music-generation sides within a coherent experience of an implemented and extendable experiential artifact.
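The abstract describes driving per-emotion Markov chains from sentence-level emotion labels. A minimal sketch of that idea in Python is below; the emotion labels, transition tables, and function names are illustrative assumptions, not MINUET's actual chains (which are built in jMusic from the paper's training data).

```python
import random

# Hypothetical first-order Markov chains over MIDI pitches, one per emotion.
# These transition probabilities are made up for illustration only.
EMOTION_CHAINS = {
    "joy":     {60: {62: 0.5, 64: 0.5}, 62: {64: 0.6, 60: 0.4}, 64: {60: 0.7, 62: 0.3}},
    "sadness": {57: {55: 0.6, 60: 0.4}, 55: {57: 0.5, 60: 0.5}, 60: {57: 0.7, 55: 0.3}},
}

def generate_melody(emotion, start_pitch, length, rng=None):
    """Walk the emotion-specific Markov chain to produce a pitch sequence.

    The emotion label would come from the sentence-level ensemble classifier;
    here it is simply passed in as a string key.
    """
    rng = rng or random.Random()
    chain = EMOTION_CHAINS[emotion]
    melody = [start_pitch]
    for _ in range(length - 1):
        transitions = chain[melody[-1]]
        pitches = list(transitions)
        weights = [transitions[p] for p in pitches]
        # Sample the next pitch according to the chain's transition weights.
        melody.append(rng.choices(pitches, weights=weights, k=1)[0])
    return melody

# Example: an 8-note "joy" melody starting on middle C (MIDI 60).
melody = generate_melody("joy", 60, 8, rng=random.Random(0))
```

In the full system, the resulting note sequence would be rendered through jMusic rather than returned as raw pitch numbers, and the chain for each emotion would be learned rather than hand-specified.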