Modeling Music and Code Knowledge to Support a Co-creative AI Agent for Education

J. Smith, E. Truesdell, Jason Freeman, Brian Magerko, K. Boyer, Tom McKlin

International Society for Music Information Retrieval Conference, 2020-10-11. DOI: https://doi.org/10.5281/ZENODO.4245386
EarSketch is an online environment for learning introductory computing concepts through code-driven, sample-based music production. This paper details the design and implementation of a module that performs code and music analyses on projects on the EarSketch platform. The analysis module combines inputs in the form of symbolic metadata, audio feature analysis, and user code to produce comprehensive models of user projects. The module performs a detailed analysis of the abstract syntax tree of a user's code to model use of computational concepts. It uses music information retrieval (MIR) and symbolic metadata to analyze users' musical design choices. These analyses produce a model containing users' coding and musical decisions, as well as qualities of the algorithmic music created by those decisions. The models produced by this module will support future development of CAI, a Co-creative Artificial Intelligence. CAI is designed to collaborate with learners and promote increased competency and engagement with topics in the EarSketch curriculum. Our module combines code analysis and MIR to further the educational goals of CAI and EarSketch and to explore the application of multimodal analysis tools to education.
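To make the abstract-syntax-tree analysis concrete, the sketch below is an illustration, not the authors' implementation: it assumes an EarSketch project written in Python and uses the standard-library `ast` module to tally a few of the computational concepts the module is said to model (loops, conditionals, user-defined functions, API calls). The `init`, `setTempo`, `fitMedia`, and `finish` calls in the toy script are real EarSketch API names; the sound constant and the specific tally categories are illustrative assumptions.

```python
import ast
from collections import Counter

def tally_concepts(source: str) -> Counter:
    """Tally selected computational concepts found in a user's script."""
    counts: Counter = Counter()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.For, ast.While)):
            counts["loop"] += 1
        elif isinstance(node, ast.If):
            counts["conditional"] += 1
        elif isinstance(node, ast.FunctionDef):
            counts["user_function"] += 1
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            counts[f"call:{node.func.id}"] += 1  # e.g. call:fitMedia
    return counts

# Toy EarSketch-style script; it is only parsed here, never executed.
script = """
init()
setTempo(120)
for measure in range(1, 9):
    if measure % 2 == 0:
        fitMedia(HIPHOP_SYNTHPLUCKLEAD_005, 1, measure, measure + 1)
finish()
"""
print(tally_concepts(script))
# e.g. Counter({'loop': 1, 'conditional': 1, 'call:fitMedia': 1, ...})
```

A real version of such a module would presumably map these raw tallies onto curriculum-level competencies (e.g., whether a loop parameterizes musical structure), but that mapping is beyond what the abstract specifies.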