{"title":"Separating sound from source: sonic transformation of the violin through electrodynamic pickups and acoustic actuation","authors":"Laurel Pardue, K. Buys, Dan Overholt, Andrew Mcpherson, Mike Edinger","doi":"10.5281/zenodo.3672958","DOIUrl":"https://doi.org/10.5281/zenodo.3672958","url":null,"abstract":"When designing an augmented acoustic instrument, it is often of interest to retain an instrument’s sound quality and nuanced response while leveraging the richness of digital synthesis. Digital audio has traditionally been generated through speakers, separating sound generation from the instrument itself, or by adding an actuator within the instrument’s resonating body, imparting new sounds along with the original. We offer a third option, isolating the playing interface from the actuated resonating body, allowing us to rewrite the relationship between performance action and sound result while retaining the general form and feel of the acoustic instrument. We present a hybrid acoustic-electronic violin based on a stick-body electric violin and an electrodynamic polyphonic pick-up capturing individual string displacements. A conventional violin body acts as the resonator, actuated using digitally altered audio of the string inputs. By attaching the electric violin above the body with acoustic isolation, we retain the physical playing experience of a normal violin along with some of the acoustic filtering and radiation of a traditional build. We propose the use of the hybrid instrument with digitally automated pitch and tone correction to make an easy violin for use as a potential motivational tool for beginning violinists.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115308502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Live Repurposing of Sounds: MIR Explorations with Personal and Crowdsourced Databases","authors":"Anna Xambó, Gerard Roma, Alexander Lerch, M. Barthet, György Fazekas","doi":"10.5281/zenodo.1302625","DOIUrl":"https://doi.org/10.5281/zenodo.1302625","url":null,"abstract":"The recent increase in the accessibility and size of personal and crowdsourced digital sound collections brought about a valuable resource for music creation. Finding and retrieving relevant sounds in performance leads to challenges that can be approached using music information retrieval (MIR). In this paper, we explore the use of MIR to retrieve and repurpose sounds in musical live coding. We present a live coding system built on SuperCollider enabling the use of audio content from online Creative Commons (CC) sound databases such as Freesound or personal sound databases. The novelty of our approach lies in exploiting high-level MIR methods (e.g., query by pitch or rhythmic cues) using live coding techniques applied to sounds. We demonstrate its potential through the reflection of an illustrative case study and the feedback from four expert users. The users tried the system with either a personal database or a crowdsourced database and reported its potential in facilitating tailorability of the tool to their own creative workflows.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115512284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"All the Noises: Hijacking Listening Machines for Performative Research","authors":"J. Bowers, Owen Green","doi":"10.5281/zenodo.1302699","DOIUrl":"https://doi.org/10.5281/zenodo.1302699","url":null,"abstract":"Research into machine listening has intensified in recent years creating a variety of techniques for recognising musical features suitable, for example, in musicological analysis or commercial application in song recognition. Within NIME, several projects exist seeking to make these techniques useful in real-time music making. However, we debate whether the functionally-oriented approaches inherited from engineering domains that much machine listening research manifests is fully suited to the exploratory, divergent, boundary-stretching, uncertainty-seeking, playful and irreverent orientations of many artists. To explore this, we engaged in a concerted collaborative design exercise in which many different listening algorithms were implemented and presented with input which challenged their customary range of application and the implicit norms of musicality which research can take for granted. An immersive 3D spatialised multichannel environment was created in which the algorithms could be explored in a hybrid installation/performance/lecture form of research presentation. The paper closes with reflections on the creative value of ‘hijacking’ formal approaches into deviant contexts, the typically undocumented practical know-how required to make algorithms work, the productivity of a playfully irreverent relationship between engineering and artistic approaches to NIME, and a sketch of a sonocybernetic aesthetics for our work.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126719525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aphysical Unmodeling Instrument: Sound Installation that Re-Physicalizes a Meta-Wind-Instrument Physical Model, Whirlwind","authors":"Tomoya Matsuura, Kazuhiro Jo","doi":"10.5281/zenodo.1302663","DOIUrl":"https://doi.org/10.5281/zenodo.1302663","url":null,"abstract":"Aphysical Unmodeling Instrument is the title of a sound installation that re-physicalizes the Whirlwind meta-windinstrument physical model. We re-implemented the Whirlwind by using real-world physical objects to comprise a sound installation. The sound propagation between a speaker and microphone was used as the delay, and a paper cylinder was employed as the resonator. This paper explains the concept and implementation of this work at the 2017 HANARART exhibition. We examine the characteristics of the work, address its limitations, and discuss the possibility of its interpretation by means of a “re-physicalization.”","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122115586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Co-Tuning Virtual-Acoustic Performance Ecosystems: observations on the development of skill and style in the study of musician-instrument relationships","authors":"P. Stapleton, M. V. Walstijn, Sandor Mehes","doi":"10.5281/zenodo.1302593","DOIUrl":"https://doi.org/10.5281/zenodo.1302593","url":null,"abstract":"In this paper we report preliminary observations from an ongoing study into how musicians explore and adapt to the parameter space of a virtual-acoustic string bridge plate instrument. These observations inform (and are informed by) a wider approach to understanding the development of skill and style in interactions between musicians and musical instruments. We discuss a performance-driven ecosystemic approach to studying musical relationships, drawing on arguments from the literature which emphasise the need to go beyond simplistic notions of control and usability when assessing exploratory and performatory musical interactions. Lastly, we focus on processes of perceptual learning and co-tuning between musician and instrument, and how these activities may contribute to the emergence of personal style as a hallmark of skilful music-making. ABSTRACT This paper provides a sample of a L A TEX document for the NIME conference series. It conforms, somewhat loosely, to the formatting guidelines for ACM SIG Proceedings. It is an alternate style which produces a tighter-looking paper and was designed in response to concerns expressed, by authors, over page-budgets. It complements the document Author’s (Alternate) Guide to Preparing ACM SIG Proceedings Us- ing L A TEX 2 ✏ and BibTEX . This source file has been written with the intention of being compiled under L A TEX2 ✏ and BibTeX.Tomake best use of this sample document, run it through L A TEX and BibTeX, and compare this source code with your compiled PDF file. A compiled PDF version is available to help you with the ‘look and feel.’ The paper submit- ted to the NIME conference must be stored in an A4-sized PDF file, so North Americans should take care not to inadvertently generate letter paper-sized PDF files. This paper template should prevent that from happening if the pdflatex program is used to generate the PDF file. The abstract should preferably be between 100 and 200 words.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126616644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Phone with the Flow: Combining Touch + Optical Flow in Mobile Instruments","authors":"Cagan Arslan, Florent Berthaut, J. Martinet, Ioan Marius Bilasco, L. Grisoni","doi":"10.5281/zenodo.1302709","DOIUrl":"https://doi.org/10.5281/zenodo.1302709","url":null,"abstract":"Mobile devices have been a promising platform for musical performance thanks to the various sensors readily available on board. In particular, mobile cameras can provide rich input as they can capture a wide variety of user gestures or environment dynamics. However, this raw camera input only provides continuous parameters and requires expensive computation. In this paper, we propose combining camera based motion/gesture input with the touch input, in order to filter movement information both temporally and spatially , thus increasing expressiveness while reducing computation time. We present a design space which demonstrates the diversity of interactions that our technique enables. We also report the results of a user study in which we observe how musicians appropriate the interaction space with an example instrument.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126275308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CTRL: A Flexible, Precision Interface for Analog Synthesis","authors":"J. Harding, R. Graham, E. Park","doi":"10.5281/zenodo.1302563","DOIUrl":"https://doi.org/10.5281/zenodo.1302563","url":null,"abstract":"This paper provides a new interface for the production and distribution of high resolution analog control signals, particularly aimed toward the control of analog modular synthesisers. Control Voltage/Gate interfaces generate Control Voltage (CV) and Gate Voltage (Gate) as a means of controlling note pitch and length respectively, and have been with us since 1986 [3]. The authors provide a unique custom CV/Gate interface and dedicated communication protocol which leverages standard USB Serial functionality and enables connectivity over a plethora of computing systems, including embedded devices such as the Raspberry Pi and ARM based devices including widely available Android TV Boxes. We provide a general overview of the unique hardware and communication protocol developments followed by use case examples toward tuning and embedded platforms, leveraging softwares ranging from Pure Data (Pd), Max, and Max for Live (M4L).","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121474455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Myo Mapper: a Myo armband to OSC mapper","authors":"Balandino Di Donato, J. Bullock, Atau Tanaka","doi":"10.5281/zenodo.1302705","DOIUrl":"https://doi.org/10.5281/zenodo.1302705","url":null,"abstract":"Myo Mapper is a free and open source cross-platform application to map data from the gestural device Myo armband into Open Sound Control (OSC) messages. It represents a `quick and easy' solution for exploring the Myo's potential for realising new interfaces for musical expression. Together with details of the software, this paper reports some applications in which Myo Mapper has been successfully used and a qualitative evaluation. We then proposed guidelines for using Myo data in interactive artworks based on insight gained from the works described and the evaluation. Findings show that Myo Mapper empowers artists and non-skilled developers to easily take advantage of Myo data high-level features for realising interactive artistic works. It also facilitates the recognition of poses and gestures beyond those included with the product by using third-party interactive machine learning software.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114323414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Crafting Digital Musical Instruments: An Exploratory Workshop Study","authors":"Jack Armitage, Andrew Mcpherson","doi":"10.5281/zenodo.1302583","DOIUrl":"https://doi.org/10.5281/zenodo.1302583","url":null,"abstract":"In digital musical instrument design, di ff erent tools and methods o ff er a variety of approaches for constraining the exploration of musical gestures and sounds. Toolkits made of modular components usefully constrain exploration towards simple, quick and functional combinations, and meth-ods such as sketching and model-making alternatively allow imagination and narrative to guide exploration. In this work we sought to investigate a context where these approaches to exploration were combined. We designed a craft work-shop for 20 musical instrument designers, where groups were given the same partly-finished instrument to craft for one hour with raw materials, and though the task was open ended, they were prompted to focus on subtle details that might distinguish their instruments. Despite the prompt the groups diverged dramatically in intent and style, and generated gestural language rapidly and flexibly. By the end, each group had developed a distinctive approach to constraint, exploratory style, collaboration and interpretation of the instrument and workshop materials. We reflect on this outcome to discuss advantages and disadvantages to integrating digital musical instrument design tools and methods, and how to further investigate and extend this approach.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129595025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Risky business: Disfluency as a design strategy","authors":"S. Bin, N. Bryan-Kinns, Ap Mcpherson","doi":"10.5281/zenodo.1302675","DOIUrl":"https://doi.org/10.5281/zenodo.1302675","url":null,"abstract":"This paper presents a study examining the effects of disfluent design on audience perception of digital musical instrument (DMI) performance. Disfluency, defined as a barrier to effortless cognitive processing, has been shown to generate better results in some contexts as it engages higher levels of cognition. We were motivated to determine if disfluent design in a DMI would result in a risk state that audiences would be able to perceive, and if this would have any effect on their evaluation of the performance. A DMI was produced that incorporated a disfluent characteristic: It would turn itself off if not constantly moved. Six physically identical instruments were produced, each in one of three versions: Control (no disfluent characteristics), mild disfluency (turned itself off slowly), and heightened disfluency (turned itself off more quickly). 6 percussionists each performed on one instrument for a live audience (N=31), and data was collected in the form of real-time feedback (via a mobile phone app), and post-hoc surveys. Though there was little difference in ratings of enjoyment between the versions of the instrument, the real-time and qualitative data suggest that disfluent behaviour in a DMI may be a way for audiences to perceive and appreciate performer skill.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114172010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}