Modeling Music and Code Knowledge to Support a Co-creative AI Agent for Education

J. Smith, E. Truesdell, Jason Freeman, Brian Magerko, K. Boyer, Tom McKlin
{"title":"为音乐和代码知识建模以支持共同创造的教育AI代理","authors":"J. Smith, E. Truesdell, Jason Freeman, Brian Magerko, K. Boyer, Tom McKlin","doi":"10.5281/ZENODO.4245386","DOIUrl":null,"url":null,"abstract":"EarSketch is an online environment for learning intro-ductory computing concepts through code-driven, sample-based music production. This paper details the design and implementation of a module to perform code and music analyses on projects on the EarSketch platform. This analysis module combines inputs in the form of symbolic metadata, audio feature analysis, and user code to produce com-prehensive models of user projects. The module performs a detailed analysis of the abstract syntax tree of a user’s code to model use of computational concepts. It uses music information retrieval (MIR) and symbolic metadata to analyze users’ musical design choices. These analyses produce a model containing users’ coding and musical deci-sions, as well as qualities of the algorithmic music created by those decisions. The models produced by this module will support future development of CAI, a Co-creative Artificial Intelligence. CAI is designed to collaborate with learners and promote increased competency and engagement with topics in the EarSketch curriculum. Our module combines code analysis and MIR to further the educational goals of CAI and EarSketch and to explore the application of multimodal analysis tools to education.","PeriodicalId":309903,"journal":{"name":"International Society for Music Information Retrieval Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Modeling Music and Code Knowledge to Support a Co-creative AI Agent for Education\",\"authors\":\"J. Smith, E. Truesdell, Jason Freeman, Brian Magerko, K. Boyer, Tom McKlin\",\"doi\":\"10.5281/ZENODO.4245386\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"EarSketch is an online environment for learning intro-ductory computing concepts through code-driven, sample-based music production. This paper details the design and implementation of a module to perform code and music analyses on projects on the EarSketch platform. This analysis module combines inputs in the form of symbolic metadata, audio feature analysis, and user code to produce com-prehensive models of user projects. The module performs a detailed analysis of the abstract syntax tree of a user’s code to model use of computational concepts. It uses music information retrieval (MIR) and symbolic metadata to analyze users’ musical design choices. These analyses produce a model containing users’ coding and musical deci-sions, as well as qualities of the algorithmic music created by those decisions. The models produced by this module will support future development of CAI, a Co-creative Artificial Intelligence. CAI is designed to collaborate with learners and promote increased competency and engagement with topics in the EarSketch curriculum. 
Our module combines code analysis and MIR to further the educational goals of CAI and EarSketch and to explore the application of multimodal analysis tools to education.\",\"PeriodicalId\":309903,\"journal\":{\"name\":\"International Society for Music Information Retrieval Conference\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Society for Music Information Retrieval Conference\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5281/ZENODO.4245386\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Society for Music Information Retrieval Conference","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5281/ZENODO.4245386","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

EarSketch is an online environment for learning introductory computing concepts through code-driven, sample-based music production. This paper details the design and implementation of a module to perform code and music analyses on projects on the EarSketch platform. This analysis module combines inputs in the form of symbolic metadata, audio feature analysis, and user code to produce comprehensive models of user projects. The module performs a detailed analysis of the abstract syntax tree of a user’s code to model use of computational concepts. It uses music information retrieval (MIR) and symbolic metadata to analyze users’ musical design choices. These analyses produce a model containing users’ coding and musical decisions, as well as qualities of the algorithmic music created by those decisions. The models produced by this module will support future development of CAI, a Co-creative Artificial Intelligence. CAI is designed to collaborate with learners and promote increased competency and engagement with topics in the EarSketch curriculum. Our module combines code analysis and MIR to further the educational goals of CAI and EarSketch and to explore the application of multimodal analysis tools to education.
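
The abstract describes an analysis pass over the abstract syntax tree of a learner’s code to model which computational concepts are in use. The sketch below is a rough, hypothetical illustration of that idea using Python’s standard `ast` module; it is not the authors’ module, and the concept categories, the helper name `tally_concepts`, and the toy script are invented for this example (only `setTempo` and `fitMedia` are real EarSketch API names).

```python
# Illustrative sketch only: tally a few introductory computing concepts
# found in the AST of an EarSketch-style Python script. The categories
# and helper name are hypothetical, not taken from the CAI/EarSketch code.
import ast
from collections import Counter

def tally_concepts(source: str) -> Counter:
    """Count occurrences of a handful of introductory computing concepts."""
    tree = ast.parse(source)
    counts = Counter()
    for node in ast.walk(tree):
        if isinstance(node, (ast.For, ast.While)):
            counts["loops"] += 1
        elif isinstance(node, ast.If):
            counts["conditionals"] += 1
        elif isinstance(node, ast.FunctionDef):
            counts["user_functions"] += 1
        elif isinstance(node, ast.Call):
            counts["function_calls"] += 1
        elif isinstance(node, (ast.List, ast.ListComp)):
            counts["lists"] += 1
    return counts

# Toy EarSketch-style script; DRUMS_CLIP is a placeholder constant name.
example = """
setTempo(120)
for measure in range(1, 9):
    if measure % 2 == 0:
        fitMedia(DRUMS_CLIP, 1, measure, measure + 1)
"""
print(tally_concepts(example))
# Prints concept counts for the toy script (loops, conditionals, function calls).
```

A model built from counts like these could be combined with audio feature analysis and symbolic metadata, as the paper proposes, to give a co-creative agent a joint view of a learner’s coding and musical decisions.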