{"title":"Euterpe: A Web Framework for Interactive Music Systems","authors":"Yongyi Zang, Christodoulos Benetatos, Zhiyao Duan","doi":"10.17743/jaes.2022.0117","DOIUrl":"https://doi.org/10.17743/jaes.2022.0117","url":null,"abstract":"","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139269177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distributing Generative Music With Alternator","authors":"Ian Clester, Jason Freeman","doi":"10.17743/jaes.2022.0113","DOIUrl":"https://doi.org/10.17743/jaes.2022.0113","url":null,"abstract":"Computers are a powerful technology for music playback: as general-purpose computing machines with capabilities beyond the fixed-recording playback devices of the past, they can play generative music with multiple outcomes or computational compositions that are not fully determined until they are played. However, there is no suitable platform for distributing generative music while preserving the spaces of possible outputs. This absence hinders composers’ and listeners’ access to the possibilities of computational playback. In this paper, the authors address the problem of distributing generative music. They present a) a dynamic format for bundling computational compositions with static assets in self-contained packages and b) a music player for finding, fetching, and playing/executing these compositions. These tools are built for generality to support a variety of approaches to making music with code and remain language-agnostic. The authors take advantage of WebAssembly and related tools to enable the use of general-purpose languages such as C, Rust, JavaScript, and Python and audio languages such as Pure Data, RTcmix, Csound, and ChucK. They use AudioWorklets and Web Workers to enable scalable distribution via client-side playback. And they present the user with a music player interface that aims to be familiar while exposing the possibilities of generative music.","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139266711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rocking the Web With Browser-Based Simulations of Tube Guitar Amplifiers","authors":"Michel Buffa, Jerome Lebrun","doi":"10.17743/jaes.2022.0110","DOIUrl":"https://doi.org/10.17743/jaes.2022.0110","url":null,"abstract":"","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139268047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Web Audio API as a Standardized Interface Beyond Web Browsers","authors":"Benjamin Matuszewski, Otto Rottier","doi":"10.17743/jaes.2022.0114","DOIUrl":"https://doi.org/10.17743/jaes.2022.0114","url":null,"abstract":"In this paper, the authors present two related libraries, web-audio-api-rs and node-web-audio-api , that provide a solution for using the Web Audio API outside the Web browsers. The first project is a low-level implementation of the Web Audio API written in the Rust language, and the second provides bindings of the core Rust library for the Node.js platform. The authors’ approach here is to consider Web standards and specifications as tools for defining standardized APIs across different environments and languages, which they believe could benefit the audio community in a more general manner. Although such a proposition presents some portability limitations due to the differences between languages, the authors think it nevertheless opens up new possibilities in sharing documentation, resources, and components across a wide range of environments, platforms, and users. The paper first describes the general design and implementation of the authors’ libraries. Then, it presents somebenchmarksoftheselibrariesagainststate-of-the-artimplementationfromWebbrowsers, andtheperformanceimprovementsthathavebeenmadeoverthelastyear.Finally,itdiscussesthecurrentknownlimitationsoftheselibrariesandproposessomedirectionsforfuturework. Thetwoprojectsareopen-source,reasonablyfeature-complete,andreadytouseinproductionapplications.","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139268349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hack the Show: Design and Analysis of Three Interaction Modes for Audience Participation","authors":"Matthias Jung, Ian Clester","doi":"10.17743/jaes.2022.0111","DOIUrl":"https://doi.org/10.17743/jaes.2022.0111","url":null,"abstract":"","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139267825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Orchestra: A Toolbox for Live Music Performances in a Web-Based Metaverse","authors":"Damian T. Dziwis, Henrik von Coler, Christoph Pörschmann","doi":"10.17743/jaes.2022.0096","DOIUrl":"https://doi.org/10.17743/jaes.2022.0096","url":null,"abstract":"As the potential of networked multiuser virtual environments increases under the concept of the metaverse, so do the interest and artistic possibilities of using them for live music performances. Live performances in online metaverse environments offer an easy and environmentally friendly way to bring together artists and audiences from all over the world. Virtualization also enables countless possibilities for designing and creating artistic experiences and new performance practices. For many years, live performances have been established on various virtual platforms, which differ significantly in terms of possible performance practices, user interaction, immersion, and usability. With Orchestra, we are developing an open-source toolbox that uses the Web Audio Application Programming Interface to realize live performances with various performance practices for web-based metaverse environments. Possibilities vary from live streaming of volumetric audio and video, live coding in multiple (including audiovisual) programming languages, to performing with generative algorithms or virtual instruments developed in Pure Data. These can be combined in various ways and also be used for telematic/networked music ensembles, interactive virtual installations, or novel performance concepts. In this paper, we describe the development and scope of the Orchestra toolbox, as well as use cases that illustrate the artistic possibilities.","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139267528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DDSP-Piano: A Neural Sound Synthesizer Informed by Instrument Knowledge","authors":"Lenny Renault, Rémi Mignot, Axel Roebel","doi":"10.17743/jaes.2022.0102","DOIUrl":"https://doi.org/10.17743/jaes.2022.0102","url":null,"abstract":"Instrument sound synthesis using deep neural networks has received numerous improvements over the last couple of years. Among them, the Differentiable Digital Signal Processing (DDSP) framework has modernized the spectral modeling paradigm by including signal-based synthesizers and effects into fully differentiable architectures. The present work extends the applications of DDSP to the task of polyphonic sound synthesis, with the proposal of a differentiable piano synthesizer conditioned on MIDI inputs. The model architecture is motivated by high-level acoustic modeling knowledge of the instrument, which, along with the sound structure priors inherent to the DDSP components, makes for a lightweight, interpretable, and realistic-sounding piano model. A subjective listening test has revealed that the proposed approach achieves better sound quality than a state-of-the-art neural-based piano synthesizer, but physical-modeling-based models still hold the best quality. Leveraging its interpretability and modularity, a qualitative analysis of the model behavior was also conducted: it highlights where additional modeling knowledge and optimization procedures could be inserted in order to improve the synthesis quality and the manipulation of sound properties. Eventually, the proposed differentiable synthesizer can be further used with other deep learning models for alternative musical tasks handling polyphonic audio and symbolic data.","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135886025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mesostructures: Beyond Spectrogram Loss in Differentiable Time–Frequency Analysis","authors":"Cyrus Vahidi, Han Han, Changhong Wang, Mathieu Lagrange, György Fazekas, Vincent Lostanlen","doi":"10.17743/jaes.2022.0103","DOIUrl":"https://doi.org/10.17743/jaes.2022.0103","url":null,"abstract":"","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135786439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reverse Engineering a Nonlinear Mix of a Multitrack Recording","authors":"Joseph Colone, Joshua Reiss","doi":"10.17743/jaes.2022.0105","DOIUrl":"https://doi.org/10.17743/jaes.2022.0105","url":null,"abstract":"","PeriodicalId":50008,"journal":{"name":"Journal of the Audio Engineering Society","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135885875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}