{"title":"On the Inclusivity of Constraint: Creative Appropriation in Instruments for Neurodiverse Children and Young People","authors":"Joe E. Wright, J. Dooley","doi":"10.5281/zenodo.3672908","DOIUrl":"https://doi.org/10.5281/zenodo.3672908","url":null,"abstract":"Taking inspiration from research into deliberately constrained musical technologies and the emergence of neurodiverse, child-led musical groups such as the Artism Ensemble, the interplay between design-constraints, inclusivity and appro- priation is explored. A small scale review covers systems from two prominent UK-based companies, and two itera- tions of a new prototype system that were developed in collaboration with a small group of young people on the autistic spectrum. Amongst these technologies, the aspects of musical experience that are made accessible differ with re- spect to the extent and nature of each system’s constraints. It is argued that the design-constraints of the new prototype system facilitated the diverse playing styles and techniques observed during its development. Based on these obser- vations, we propose that deliberately constrained musical instruments may be one way of providing more opportuni- ties for the emergence of personal practices and preferences in neurodiverse groups of children and young people, and that this is a fitting subject for further research.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123680562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating Convincing Harmony Parts with Simple Long Short-Term Memory Networks","authors":"Andrei Faitas, S. Baumann, Torgrim Rudland Næss, J. Tørresen, Charles Patrick Martin","doi":"10.5281/zenodo.3672980","DOIUrl":"https://doi.org/10.5281/zenodo.3672980","url":null,"abstract":"Generating convincing music via deep neural networks is a challenging problem that shows promise for many applications including interactive musical creation. One part of this challenge is the problem of generating convincing accompaniment parts to a given melody, as could be used in an automatic accompaniment system. Despite much progress in this area, systems that can automatically learn to generate interesting and harmonically plausible accompaniments remain somewhat elusive. In this paper we explore systems where a user provides a sequence of notes, and a neural network model responds with an accompanying sequence of equal length. We consider two popular sequenceto-sequence models; one featuring standard unidirectional long short-term memory (LSTM) architecture, and the other featuring bidirectional LSTM. These are evaluated and compared via a qualitative study that features 106 respondents listening to eight random samples from our set of generated music, as well as two human samples. From the results we see a preference for the sequences generated by the bidirectional model as well as an indication that these sequences sound more human.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130181196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Vrengt: A Shared Body-Machine Instrument for Music-Dance Performance","authors":"Çağrı Erdem, K. Schia, A. Jensenius","doi":"10.5281/zenodo.3672917","DOIUrl":"https://doi.org/10.5281/zenodo.3672917","url":null,"abstract":"This paper describes the process of developing a shared instrument for music-dance performance, with a particular focus on exploring the boundaries between standstill vs motion, and silence vs sound. The piece Vrengt grew from the idea of enabling a true partnership between a musician and a dancer, developing an instrument that would allow for active co-performance. Using a participatory design approach, we worked with sonification as a tool for systematically exploring the dancer's bodily expressions. The exploration used a \"spatiotemporal matrix\", with a particular focus on sonic microinteraction. In the final performance, two Myo armbands were used for capturing muscle activity of the arm and leg of the dancer, together with a wireless headset microphone capturing the sound of breathing. In the paper we reflect on multi-user instrument paradigms, discuss our approach to creating a shared instrument using sonification as a tool for the sound design, and reflect on the performers' subjective evaluation of the instrument.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128501808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OtoKin: Mapping for Sound Space Exploration through Dance Improvisation","authors":"Palle Dahlstedt, Ami Skånberg Dahlstedt","doi":"10.5281/zenodo.3672906","DOIUrl":"https://doi.org/10.5281/zenodo.3672906","url":null,"abstract":"","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133692464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Modular Backward Evolution - Why to Use Outdated Technologies","authors":"Beat Rossmy, A. Wiethoff","doi":"10.5281/zenodo.3672988","DOIUrl":"https://doi.org/10.5281/zenodo.3672988","url":null,"abstract":"In this paper we draw a picture that captures the increasing interest in the format of modular synthesizers today. We therefore provide a historical summary, which includes the origins, the fall and the rediscovery of that technology. Further an empirical analysis is performed based on statements given by artists and manufacturers taken from published interviews.These statements were aggregated, ob-jectified and later reviewed by an expert group consisting of modular synthesizer vendors. Their responses provide the basis for the discussion on how emerging trends in synthe-sizer interface design reveal challenges and opportunities for the NIME community.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115318904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rebuilding and Reinterpreting a Digital Musical Instrument - The Sponge","authors":"A. Tom, H. Venkatesan, Ivan Franco, M. Wanderley","doi":"10.5281/zenodo.3672858","DOIUrl":"https://doi.org/10.5281/zenodo.3672858","url":null,"abstract":"Although several Digital Musical Instruments (DMIs) have been presented at NIME, very few of them remain accessible to the community. Rebuilding a DMI is often a necessary step to allow for performance with NIMEs. Rebuilding a DMI exactly similar to its original, however, might not be possible due to technology obsolescence, lack of documentation or other reasons. It might then be interesting to re-interpret a DMI and build an instrument inspired by the original one, creating novel performance opportunities. This paper presents the challenges and approaches involved in rebuilding and re-interpreting an existing DMI, The Sponge by Martin Marier. The rebuilt versions make use of newer/improved technology and customized design aspects like addition of vibrotactile feedback and implementation of different mapping strategies. It also discusses the implications of embedding sound synthesis within the DMI, by using the Prynth framework and further presents a comparison between this approach and the more traditional ground-up approach. As a result of the evaluation and comparison of the two rebuilt DMIs, we present a third version which combines the benefits and discuss performance issues with these devices.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"432 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116574187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NIME Prototyping in Teams: A Participatory Approach to Teaching Physical Computing","authors":"Anna Xambó, Sigurd Saue, A. Jensenius, R. Støckert, Øyvind Brandtsegg","doi":"10.5281/zenodo.3672932","DOIUrl":"https://doi.org/10.5281/zenodo.3672932","url":null,"abstract":"In this paper, we present a workshop of physical computing applied to NIME design based on science, technology, engineering, arts, and mathematics (STEAM) education. The workshop is designed for master students with multidisciplinary backgrounds. They are encouraged to work in teams from two university campuses remotely connected through a portal space. The components of the workshop are prototyping, music improvisation and reflective practice. We report the results of this course, which show a positive impact on the students’ confidence in prototyping and intention to continue in STEM fields. We also present the challenges and lessons learned on how to improve the teaching of hybrid technologies and programming skills in an interdisciplinary context across two locations, with the aim of satisfying both beginners and experts. We conclude with a broader discussion on how these new pedagogical perspectives can improve NIME-related courses.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122661414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Grain Prism: Hieroglyphic Interface for Granular Sampling","authors":"Gabriela Bila Advincula, D. D. Haddad, K. Larson","doi":"10.5281/zenodo.3672960","DOIUrl":"https://doi.org/10.5281/zenodo.3672960","url":null,"abstract":"This paper introduces the Grain Prism, a hybrid musical instrument that combines granular synthesis and live audio recordings. Presented in a capacitive touch interface, users are invited to create experimental sound textures with their own recordings. The interface’s touch plates are introduced within a series of obscure glyphs, instigating the player to decipher the hidden sonic messages. This way, the mysterious interface opens space to aleatoricism in the act of conjuring sound, and therefore the discovery of “happy accidents” in making electronic music.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124808670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Mondrian to Modular Synth: Rendering NIME using Generative Adversarial Networks","authors":"Akito van Troyer, Rébecca Kleinberger","doi":"10.5281/zenodo.3672956","DOIUrl":"https://doi.org/10.5281/zenodo.3672956","url":null,"abstract":"This paper explores the potential of image-to-image translation techniques in aiding the design of new hardware-based musical interfaces such as MIDI keyboard, grid-based controller, drum machine, and analog modular synthesizers. We collected an extensive image database of such interfaces and implemented image-to-image translation techniques using variants of Generative Adversarial Networks. The created models learn the mapping between input and output images using a training set of either paired or unpaired images. We qualitatively assess the visual outcomes based on three image-to-image translation models: reconstructing interfaces from edge maps, and collection style transfers based on two image sets: visuals of mosaic tile patterns and geometric abstract two-dimensional arts. This paper aims to demonstrate that synthesizing interface layouts based on image-to-image translation techniques can yield insights for researchers, musicians, music technology industrial designers, and the broader NIME community.","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114290049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ha Dou Ken Music: Different mappings to play music with joysticks","authors":"Gabriel Lopes Rocha, J. Araújo, F. Schiavoni","doi":"10.5281/zenodo.3672872","DOIUrl":"https://doi.org/10.5281/zenodo.3672872","url":null,"abstract":"Due to video game controls great presence in popular culture and its ease of access, even people who are not in the habit of playing electronic games possibly interacted with this kind of interface once in a lifetime. Thus, gestures like pressing a sequence of buttons, pressing them simultaneously or sliding your fingers through the control can be mapped for musical creation. This work aims the elaboration of a strategy in which several gestures performed in a joystick control can influence one or several parameters of the sound synthesis, making a mapping denominated many to many. Buttons combinations used to perform game actions that are common in fighting games, like Street Fighter, were mapped to the synthesizer to create a music. Experiments show that this mapping is capable of influencing the musical expression of a DMI making it closer to an acoustic","PeriodicalId":161317,"journal":{"name":"New Interfaces for Musical Expression","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122327703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}