"Interactive Rainbow Score: A Visual-centered Multimodal Flute Tutoring System"
Daniel Chin, Yian Zhang, Tianyu Zhang, Jake Zhao, Gus G. Xia
New Interfaces for Musical Expression, 2020-04-29. DOI: 10.5281/zenodo.4813324

Abstract: Learning to play an instrument is intrinsically multimodal, and we have seen a trend of applying visual and haptic feedback in music games and computer-aided music tutoring systems. However, most current systems are still designed to master individual pieces of music; it is unclear how well the learned skills generalize to new pieces. We aim to explore this question. In this study, we contribute Interactive Rainbow Score, an interactive visual system to boost the learning of sight-playing, the general musical skill of reading music and mapping the visual representations to performance motions. The key design of Interactive Rainbow Score is to associate pitches (and the corresponding motions) with colored notation and to further strengthen this association via real-time interactions. Quantitative results show that the interactive feature increases learning efficiency by 31.1% on average. Further analysis indicates that it is critical to apply the interaction in the early period of learning.
"VR Open Scores: Scores as Inspiration for VR Scenarios"
Raul Masu, Paulo Bala, M. A. Ahmad, N. Correia, Valentina Nisi, N. Nunes, T. Romão
New Interfaces for Musical Expression, 2020-04-27. DOI: 10.5281/zenodo.4813262

Abstract: In this paper, we introduce the concept of VR Open Scores: score-based virtual scenarios in which an aleatoric score is embedded in a virtual environment. This idea builds upon the notions of graphic scores and composed instruments and applies them in a new context. Our proposal also explores possible parallels between open meaning in interaction design and the aleatoric score, conceptualized as the Open Work by the Italian philosopher Umberto Eco. Our approach has two aims. The first is to create an environment where users can immerse themselves in the visual elements of a score while listening to the corresponding music. The second is to help users develop a personal relationship with both the system and the score. To achieve these aims, as a practical implementation of our proposed concept, we developed two immersive scenarios: a 360° video and an interactive space. We conclude by presenting how our design aims were accomplished in the two scenarios and describing positive and negative elements of our implementations.
{"title":"Reflections on Eight Years of Instrument Creation with Machine Learning","authors":"R. Fiebrink, Laetitia Sonami","doi":"10.5281/zenodo.4813334","DOIUrl":"https://doi.org/10.5281/zenodo.4813334","url":null,"abstract":"Machine learning (ML) has been used to create mappings for digital musical instruments for over twenty-five years, and numerous ML toolkits have been developed for the NIME community. However, little published work has studied how ML has been used in sustained instrument building and performance practices. This paper examines the experiences of instrument builder and performer Laetitia Sonami, who has been using ML to build and refine her Spring Spyre instrument since 2012. Using Sonami’s current practice as a case study, this paper explores the utility, opportunities, and challenges involved in using ML in practice over many years. This paper also reports the perspective of Rebecca Fiebrink, the creator of the Wekinator ML tool used by Sonami, revealing how her work with Sonami has led to changes to the software and to her teaching. This paper thus contributes a deeper understanding of the value of ML for NIME practitioners, and it can inform design considerations for future ML toolkits as well as NIME pedagogy. Further, it provides new perspectives on familiar NIME conversations about mapping strategies, expressivity, and control, informed by a dedicated practice over many years.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124765677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Learning from History: Recreating and Repurposing Harriet Padberg's Computer Composed Canon and Free Fugue"
Richard J. Savery, Benjamin Genchel, Jason Smith, Anthony Caulkins, Molly Jones, A. Savery
New Interfaces for Musical Expression, 2019-07-10. DOI: 10.5281/zenodo.3673021

Abstract: Harriet Padberg wrote Computer-Composed Canon and Free Fugue as part of her 1964 dissertation in Mathematics and Music at Saint Louis University. This program is one of the earliest examples of text-to-music software and algorithmic composition, areas of great interest in the present-day field of music technology. This paper analyzes the technological innovation, aesthetic design process, and impact of Padberg's original 1964 thesis, as well as the design of a modern recreation and its utilization, in order to gain insight into the nature of revisiting older works. Here, we present our open-source recreation of Padberg's program with a modern interface and, through its use as an artistic tool by three composers, show how historical works can be effectively used for new creative purposes in contemporary contexts. Not Even One by Molly Jones draws on the historical and social significance of Harriet Padberg by using her program in a piece about the lack of representation of women judges in composition competitions. Brevity by Anna Savery utilizes the original software design as a composition tool, and The Padberg Piano by Anthony Caulkins uses the melodic generation of the original to create a software instrument.
"Designing Gestures for Continuous Sonic Interaction"
Atau Tanaka, Balandino Di Donato, Michael Zbyszynski, G. Roks
New Interfaces for Musical Expression, 2019-06-03. DOI: 10.5281/zenodo.3672916

Abstract: We present a system that allows users to try different ways of training neural networks and temporal models to associate gestures with time-varying sound. We created a software framework for this and evaluated it in a workshop-based study. We build upon research in sound tracing and mapping-by-demonstration, asking participants to design gestures for performing time-varying sounds using a multimodal device combining inertial measurement (IMU) and muscle sensing (EMG). We presented users with two classical techniques from the literature, static position regression and hidden-Markov-based temporal modelling, and propose a new technique, called Windowed Regression, that captures gesture anchor points on the fly as training data for neural-network-based regression. Our results show trade-offs between accurate, predictable reproduction of source sounds and exploration of the gesture-sound space. Several users were attracted to our Windowed Regression technique. This paper will be of interest to musicians engaged in going from sound design to gesture design, and it offers a workflow for interactive machine learning.
{"title":"Reach: a keyboard-based gesture recognition system for live piano sound modulation","authors":"Niccolò Granieri, J. Dooley","doi":"10.5281/zenodo.3673000","DOIUrl":"https://doi.org/10.5281/zenodo.3673000","url":null,"abstract":"This paper presents Reach, a keyboard-based gesture recog- nition system for live piano sound modulation. Reach is a system built using the Leap Motion Orion SDK, Pure Data and a custom C++ OSC mapper1. It provides control over the sound modulation of an acoustic piano using the pi- anist’s ancillary gestures. \u0000 \u0000The system was developed using an iterative design pro- cess, incorporating research findings from two user studies and several case studies. The results that emerged show the potential of recognising and utilising the pianist’s existing technique when designing keyboard-based DMIs, reducing the requirement to learn additional techniques.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129190798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Automatic Recognition of Soundpainting for the Generation of Electronic Music Sounds"
David Antonio Gómez Jáuregui, Irvin Dongo, N. Couture
New Interfaces for Musical Expression, 2019-06-03. DOI: 10.5281/zenodo.3672866

Abstract: This work explores a new gesture-based interaction built on automatic recognition of Soundpainting, a structured gestural language. In the proposed approach, a composer (called a Soundpainter) performs Soundpainting gestures facing a Microsoft Kinect sensor. A gesture recognition system then captures the gestures and sends them to sound generator software. The proposed method was used to stage an artistic show in which a Soundpainter improvised with six different gestures to generate a musical composition from different sounds in real time. We evaluated the accuracy of the gesture recognition system as well as the Soundpainter's user experience, and additionally conducted a user evaluation study of the system in a learning context. Current results open up perspectives for the design of new artistic expressions based on automatic gesture recognition supported by Soundpainting language.
"Facilitating Creative Exploratory Search with Multiple Networked Audio Devices Using HappyBrackets"
O. Bown, Angelo Fraietta, Sam Ferguson, L. Loke, Liam Bray
New Interfaces for Musical Expression, 2019-06-03. DOI: 10.5281/zenodo.3672962

Abstract: We present HappyBrackets, an audio-focused creative coding toolkit for deploying music programs to remote networked devices. It is designed to support efficient creative exploratory search in the context of the Internet of Things (IoT), where one or more devices must be configured, programmed, and made to interact over a network, with applications in digital musical instruments, networked music performance, and other digital experiences. Users can easily monitor and hack what multiple devices are doing on the fly, enhancing their ability to perform "exploratory search" in a creative workflow. We present two creative case studies using the system: the creation of a dance performance and the creation of a distributed musical installation. Analysing different activities within the production process, with a particular focus on the trade-off between more creative exploratory tasks and more standard configuring and problem-solving tasks, we show how the system supports creative exploratory search for multiple networked devices, and we consider design principles that could advance this support.
{"title":"Taptop, Armtop, Blowtop: Evolving the Physical Laptop Instrument","authors":"Doga Cavdir, Juan Sierra, Ge Wang","doi":"10.5281/zenodo.3672864","DOIUrl":"https://doi.org/10.5281/zenodo.3672864","url":null,"abstract":"This research represents an evolution and evaluation of the embodied physical laptop instruments. Specifically, these are instruments that are physical in that they use bodily interaction, take advantage of the physical affordances of the laptop. They are embodied in the sense that instruments are played in such ways where the sound is embedded to be close to the instrument. Three distinct laptop instruments, Taptop, Armtop, and Blowtop, are introduced in this paper. We discuss the integrity of the design process with composing for laptop instruments and performing with them. In this process, our aim is to blur the boundaries of the composer and designer/engineer roles. How the physicality is Licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0). Copyright remains with the author(s). NIME’19, June 3-6, 2019, Federal University of Rio Grande do Sul, Porto Alegre, Brazil. achieved by leveraging musical gestures gained through traditional instrument practice is studied, as well as those inspired by body gestures. We aim to explore how using such interaction methods affects the communication between the ensemble and the audience. An aesthetic-first qualitative evaluation of these interfaces is discussed, through works and performances crafted specifically for these instruments and presented in the concert setting of the laptop orchestra. In so doing, we reflect on how such physical, embodied instrument design practices can inform a different kind of expressive and performance mindset.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115032795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"T-Voks: the Singing and Speaking Theremin"
Xiao Xiao, Grégoire Locqueville, C. d'Alessandro, B. Doval
New Interfaces for Musical Expression, 2019-06-01. DOI: 10.5281/zenodo.3672886

Abstract: T-Voks is an augmented theremin that controls Voks, a performative singing synthesizer. Originally developed for control with a graphic tablet interface, Voks allows real-time pitch and time scaling, vocal effort modification, and syllable sequencing for pre-recorded voice utterances. For T-Voks, the theremin's frequency antenna modifies the output pitch of the target utterance, while the amplitude antenna controls not only volume, as usual, but also voice quality and vocal effort. Syllabic sequencing is handled by an additional pressure sensor attached to the player's volume-control hand. This paper presents the system architecture of T-Voks, the preparation procedure for a song, playing gestures, and practice techniques, along with musical and poetic examples across four different languages and styles.